Test Report: KVM_Linux_crio 17486

                    
90bfaeb6484f3951039c439350045b001b754599:2023-11-01:31693

Failed tests (28/292)

Order  Failed test  Duration (s)
28 TestAddons/parallel/Ingress 174.49
41 TestAddons/StoppedEnableDisable 155.35
130 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.27
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 4.67
157 TestIngressAddonLegacy/serial/ValidateIngressAddons 177.71
205 TestMultiNode/serial/PingHostFrom2Pods 3.34
211 TestMultiNode/serial/RestartKeepsNodes 685.63
213 TestMultiNode/serial/StopMultiNode 142.88
220 TestPreload 281.75
226 TestRunningBinaryUpgrade 205.48
251 TestStoppedBinaryUpgrade/Upgrade 302.3
263 TestPause/serial/SecondStartNoReconfiguration 38.47
320 TestStartStop/group/no-preload/serial/Stop 139.66
323 TestStartStop/group/embed-certs/serial/Stop 139.48
325 TestStartStop/group/old-k8s-version/serial/Stop 139.92
328 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.56
329 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.41
330 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.42
331 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 12.42
335 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
337 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 543.26
338 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 543.25
339 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 543.16
340 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.08
341 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 404.13
342 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 363.86
343 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 309.05
344 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 225.69
TestAddons/parallel/Ingress (174.49s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-798361 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context addons-798361 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.579954495s)
addons_test.go:231: (dbg) Run:  kubectl --context addons-798361 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:231: (dbg) Non-zero exit: kubectl --context addons-798361 replace --force -f testdata/nginx-ingress-v1.yaml: exit status 1 (179.804184ms)

** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": dial tcp 10.104.167.183:443: connect: connection refused

** /stderr **
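The webhook error above means the ingress-nginx admission Service was not reachable yet when the first kubectl replace ran; the retry on the next line succeeds. A rough way to confirm the webhook backend is up before applying an Ingress (illustrative commands, not part of the test harness) would be:

	# Does the admission Service have ready endpoints yet?
	kubectl --context addons-798361 -n ingress-nginx get endpoints ingress-nginx-controller-admission

	# Is the controller pod backing the webhook Ready?
	kubectl --context addons-798361 -n ingress-nginx get pods -l app.kubernetes.io/component=controller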
addons_test.go:231: (dbg) Run:  kubectl --context addons-798361 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-798361 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [95a3b560-345b-4ce8-aecb-2b42ff0e1ca2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [95a3b560-345b-4ce8-aecb-2b42ff0e1ca2] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 18.021164759s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-798361 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-798361 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.984270004s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
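Exit status 28 here is curl's "operation timed out" code propagated through ssh, i.e. the request to 127.0.0.1:80 inside the VM never completed. Reproducing the probe by hand would look roughly like the following (hypothetical manual run mirroring the command at addons_test.go:261, with an explicit 30s curl timeout added):

	# Probe the ingress from inside the minikube VM with the expected Host header
	out/minikube-linux-amd64 -p addons-798361 ssh "curl -s -m 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"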
addons_test.go:285: (dbg) Run:  kubectl --context addons-798361 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:285: (dbg) Done: kubectl --context addons-798361 replace --force -f testdata/ingress-dns-example-v1.yaml: (1.003619062s)
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-798361 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.214
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-798361 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-798361 addons disable ingress-dns --alsologtostderr -v=1: (1.135875699s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-798361 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-798361 addons disable ingress --alsologtostderr -v=1: (7.806467613s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-798361 -n addons-798361
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-798361 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-798361 logs -n 25: (1.424353875s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-319582 | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:44 UTC |                     |
	|         | -p download-only-319582                                                                     |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                                                                |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |                |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:44 UTC | 31 Oct 23 23:44 UTC |
	| delete  | -p download-only-319582                                                                     | download-only-319582 | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:44 UTC | 31 Oct 23 23:44 UTC |
	| delete  | -p download-only-319582                                                                     | download-only-319582 | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:44 UTC | 31 Oct 23 23:44 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-737072 | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:44 UTC |                     |
	|         | binary-mirror-737072                                                                        |                      |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |                |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |                |                     |                     |
	|         | http://127.0.0.1:42933                                                                      |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |                |                     |                     |
	| delete  | -p binary-mirror-737072                                                                     | binary-mirror-737072 | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:44 UTC | 31 Oct 23 23:44 UTC |
	| addons  | disable dashboard -p                                                                        | addons-798361        | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:44 UTC |                     |
	|         | addons-798361                                                                               |                      |         |                |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-798361        | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:44 UTC |                     |
	|         | addons-798361                                                                               |                      |         |                |                     |                     |
	| start   | -p addons-798361 --wait=true                                                                | addons-798361        | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:44 UTC | 31 Oct 23 23:48 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |                |                     |                     |
	|         | --addons=registry                                                                           |                      |         |                |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |                |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |                |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |                |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |                |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |                |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |                |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |                |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |                |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-798361        | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:48 UTC | 31 Oct 23 23:48 UTC |
	|         | addons-798361                                                                               |                      |         |                |                     |                     |
	| addons  | addons-798361 addons                                                                        | addons-798361        | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:48 UTC | 31 Oct 23 23:48 UTC |
	|         | disable metrics-server                                                                      |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| ip      | addons-798361 ip                                                                            | addons-798361        | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:48 UTC | 31 Oct 23 23:48 UTC |
	| addons  | addons-798361 addons disable                                                                | addons-798361        | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:48 UTC | 31 Oct 23 23:48 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-798361        | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:48 UTC | 31 Oct 23 23:48 UTC |
	|         | -p addons-798361                                                                            |                      |         |                |                     |                     |
	| addons  | addons-798361 addons disable                                                                | addons-798361        | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:48 UTC | 31 Oct 23 23:48 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| ssh     | addons-798361 ssh curl -s                                                                   | addons-798361        | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:48 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |                |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |                |                     |                     |
	| ssh     | addons-798361 ssh cat                                                                       | addons-798361        | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:48 UTC | 31 Oct 23 23:48 UTC |
	|         | /opt/local-path-provisioner/pvc-e5f2c674-b386-4f07-bc7b-156e081994b8_default_test-pvc/file1 |                      |         |                |                     |                     |
	| addons  | addons-798361 addons disable                                                                | addons-798361        | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:48 UTC | 31 Oct 23 23:48 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | enable headlamp                                                                             | addons-798361        | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:48 UTC | 31 Oct 23 23:48 UTC |
	|         | -p addons-798361                                                                            |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-798361        | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:48 UTC | 31 Oct 23 23:48 UTC |
	|         | addons-798361                                                                               |                      |         |                |                     |                     |
	| addons  | addons-798361 addons                                                                        | addons-798361        | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:48 UTC | 31 Oct 23 23:48 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | addons-798361 addons                                                                        | addons-798361        | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:48 UTC | 31 Oct 23 23:48 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| ip      | addons-798361 ip                                                                            | addons-798361        | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:50 UTC | 31 Oct 23 23:50 UTC |
	| addons  | addons-798361 addons disable                                                                | addons-798361        | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:50 UTC | 31 Oct 23 23:50 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | addons-798361 addons disable                                                                | addons-798361        | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:50 UTC | 31 Oct 23 23:51 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |                |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/31 23:44:27
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 23:44:27.444513   14921 out.go:296] Setting OutFile to fd 1 ...
	I1031 23:44:27.444642   14921 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 23:44:27.444649   14921 out.go:309] Setting ErrFile to fd 2...
	I1031 23:44:27.444655   14921 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 23:44:27.444876   14921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7305/.minikube/bin
	I1031 23:44:27.445511   14921 out.go:303] Setting JSON to false
	I1031 23:44:27.446396   14921 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1613,"bootTime":1698794255,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 23:44:27.446462   14921 start.go:138] virtualization: kvm guest
	I1031 23:44:27.479092   14921 out.go:177] * [addons-798361] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1031 23:44:27.606724   14921 out.go:177]   - MINIKUBE_LOCATION=17486
	I1031 23:44:27.553786   14921 notify.go:220] Checking for updates...
	I1031 23:44:27.761231   14921 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 23:44:27.895696   14921 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1031 23:44:28.022408   14921 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7305/.minikube
	I1031 23:44:28.150363   14921 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 23:44:28.253160   14921 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1031 23:44:28.324715   14921 driver.go:378] Setting default libvirt URI to qemu:///system
	I1031 23:44:28.409404   14921 out.go:177] * Using the kvm2 driver based on user configuration
	I1031 23:44:28.429183   14921 start.go:298] selected driver: kvm2
	I1031 23:44:28.429210   14921 start.go:902] validating driver "kvm2" against <nil>
	I1031 23:44:28.429222   14921 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 23:44:28.429980   14921 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 23:44:28.430074   14921 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17486-7305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 23:44:28.445062   14921 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1031 23:44:28.445109   14921 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1031 23:44:28.445320   14921 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1031 23:44:28.445404   14921 cni.go:84] Creating CNI manager for ""
	I1031 23:44:28.445424   14921 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 23:44:28.445440   14921 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1031 23:44:28.445452   14921 start_flags.go:323] config:
	{Name:addons-798361 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-798361 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 23:44:28.445620   14921 iso.go:125] acquiring lock: {Name:mk1f649ca0b7c1ae293cd66cb85f9eeda028b20b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 23:44:28.483173   14921 out.go:177] * Starting control plane node addons-798361 in cluster addons-798361
	I1031 23:44:28.484737   14921 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1031 23:44:28.484804   14921 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1031 23:44:28.484815   14921 cache.go:56] Caching tarball of preloaded images
	I1031 23:44:28.484912   14921 preload.go:174] Found /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1031 23:44:28.484931   14921 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1031 23:44:28.485295   14921 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/config.json ...
	I1031 23:44:28.485328   14921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/config.json: {Name:mk7e4aebc1dfc2e449402cbd1974c942bc27ccb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 23:44:28.485506   14921 start.go:365] acquiring machines lock for addons-798361: {Name:mk7aad88408c319111b9be8e59d9593a9e88374b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 23:44:28.485565   14921 start.go:369] acquired machines lock for "addons-798361" in 42.735µs
	I1031 23:44:28.485591   14921 start.go:93] Provisioning new machine with config: &{Name:addons-798361 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-798361 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1031 23:44:28.485668   14921 start.go:125] createHost starting for "" (driver="kvm2")
	I1031 23:44:28.487654   14921 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1031 23:44:28.487815   14921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:44:28.487881   14921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:44:28.502701   14921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42309
	I1031 23:44:28.503136   14921 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:44:28.503765   14921 main.go:141] libmachine: Using API Version  1
	I1031 23:44:28.503784   14921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:44:28.504139   14921 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:44:28.504382   14921 main.go:141] libmachine: (addons-798361) Calling .GetMachineName
	I1031 23:44:28.504520   14921 main.go:141] libmachine: (addons-798361) Calling .DriverName
	I1031 23:44:28.504670   14921 start.go:159] libmachine.API.Create for "addons-798361" (driver="kvm2")
	I1031 23:44:28.504700   14921 client.go:168] LocalClient.Create starting
	I1031 23:44:28.504745   14921 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem
	I1031 23:44:28.566143   14921 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem
	I1031 23:44:28.767322   14921 main.go:141] libmachine: Running pre-create checks...
	I1031 23:44:28.767355   14921 main.go:141] libmachine: (addons-798361) Calling .PreCreateCheck
	I1031 23:44:28.767910   14921 main.go:141] libmachine: (addons-798361) Calling .GetConfigRaw
	I1031 23:44:28.768488   14921 main.go:141] libmachine: Creating machine...
	I1031 23:44:28.768506   14921 main.go:141] libmachine: (addons-798361) Calling .Create
	I1031 23:44:28.768677   14921 main.go:141] libmachine: (addons-798361) Creating KVM machine...
	I1031 23:44:28.769942   14921 main.go:141] libmachine: (addons-798361) DBG | found existing default KVM network
	I1031 23:44:28.770654   14921 main.go:141] libmachine: (addons-798361) DBG | I1031 23:44:28.770507   14944 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000147900}
	I1031 23:44:28.776419   14921 main.go:141] libmachine: (addons-798361) DBG | trying to create private KVM network mk-addons-798361 192.168.39.0/24...
	I1031 23:44:28.849019   14921 main.go:141] libmachine: (addons-798361) DBG | private KVM network mk-addons-798361 192.168.39.0/24 created
	I1031 23:44:28.849053   14921 main.go:141] libmachine: (addons-798361) Setting up store path in /home/jenkins/minikube-integration/17486-7305/.minikube/machines/addons-798361 ...
	I1031 23:44:28.849066   14921 main.go:141] libmachine: (addons-798361) DBG | I1031 23:44:28.848991   14944 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17486-7305/.minikube
	I1031 23:44:28.849080   14921 main.go:141] libmachine: (addons-798361) Building disk image from file:///home/jenkins/minikube-integration/17486-7305/.minikube/cache/iso/amd64/minikube-v1.32.0-1698773592-17486-amd64.iso
	I1031 23:44:28.849133   14921 main.go:141] libmachine: (addons-798361) Downloading /home/jenkins/minikube-integration/17486-7305/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17486-7305/.minikube/cache/iso/amd64/minikube-v1.32.0-1698773592-17486-amd64.iso...
	I1031 23:44:29.079139   14921 main.go:141] libmachine: (addons-798361) DBG | I1031 23:44:29.079002   14944 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/addons-798361/id_rsa...
	I1031 23:44:29.188105   14921 main.go:141] libmachine: (addons-798361) DBG | I1031 23:44:29.187981   14944 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/addons-798361/addons-798361.rawdisk...
	I1031 23:44:29.188142   14921 main.go:141] libmachine: (addons-798361) DBG | Writing magic tar header
	I1031 23:44:29.188152   14921 main.go:141] libmachine: (addons-798361) DBG | Writing SSH key tar header
	I1031 23:44:29.188160   14921 main.go:141] libmachine: (addons-798361) DBG | I1031 23:44:29.188084   14944 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17486-7305/.minikube/machines/addons-798361 ...
	I1031 23:44:29.188199   14921 main.go:141] libmachine: (addons-798361) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/addons-798361
	I1031 23:44:29.188274   14921 main.go:141] libmachine: (addons-798361) Setting executable bit set on /home/jenkins/minikube-integration/17486-7305/.minikube/machines/addons-798361 (perms=drwx------)
	I1031 23:44:29.188303   14921 main.go:141] libmachine: (addons-798361) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17486-7305/.minikube/machines
	I1031 23:44:29.188319   14921 main.go:141] libmachine: (addons-798361) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17486-7305/.minikube
	I1031 23:44:29.188345   14921 main.go:141] libmachine: (addons-798361) Setting executable bit set on /home/jenkins/minikube-integration/17486-7305/.minikube/machines (perms=drwxr-xr-x)
	I1031 23:44:29.188361   14921 main.go:141] libmachine: (addons-798361) Setting executable bit set on /home/jenkins/minikube-integration/17486-7305/.minikube (perms=drwxr-xr-x)
	I1031 23:44:29.188367   14921 main.go:141] libmachine: (addons-798361) Setting executable bit set on /home/jenkins/minikube-integration/17486-7305 (perms=drwxrwxr-x)
	I1031 23:44:29.188376   14921 main.go:141] libmachine: (addons-798361) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1031 23:44:29.188392   14921 main.go:141] libmachine: (addons-798361) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1031 23:44:29.188407   14921 main.go:141] libmachine: (addons-798361) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17486-7305
	I1031 23:44:29.188418   14921 main.go:141] libmachine: (addons-798361) Creating domain...
	I1031 23:44:29.188435   14921 main.go:141] libmachine: (addons-798361) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1031 23:44:29.188445   14921 main.go:141] libmachine: (addons-798361) DBG | Checking permissions on dir: /home/jenkins
	I1031 23:44:29.188460   14921 main.go:141] libmachine: (addons-798361) DBG | Checking permissions on dir: /home
	I1031 23:44:29.188471   14921 main.go:141] libmachine: (addons-798361) DBG | Skipping /home - not owner
	I1031 23:44:29.189366   14921 main.go:141] libmachine: (addons-798361) define libvirt domain using xml: 
	I1031 23:44:29.189380   14921 main.go:141] libmachine: (addons-798361) <domain type='kvm'>
	I1031 23:44:29.189386   14921 main.go:141] libmachine: (addons-798361)   <name>addons-798361</name>
	I1031 23:44:29.189399   14921 main.go:141] libmachine: (addons-798361)   <memory unit='MiB'>4000</memory>
	I1031 23:44:29.189410   14921 main.go:141] libmachine: (addons-798361)   <vcpu>2</vcpu>
	I1031 23:44:29.189425   14921 main.go:141] libmachine: (addons-798361)   <features>
	I1031 23:44:29.189446   14921 main.go:141] libmachine: (addons-798361)     <acpi/>
	I1031 23:44:29.189451   14921 main.go:141] libmachine: (addons-798361)     <apic/>
	I1031 23:44:29.189457   14921 main.go:141] libmachine: (addons-798361)     <pae/>
	I1031 23:44:29.189462   14921 main.go:141] libmachine: (addons-798361)     
	I1031 23:44:29.189475   14921 main.go:141] libmachine: (addons-798361)   </features>
	I1031 23:44:29.189494   14921 main.go:141] libmachine: (addons-798361)   <cpu mode='host-passthrough'>
	I1031 23:44:29.189507   14921 main.go:141] libmachine: (addons-798361)   
	I1031 23:44:29.189520   14921 main.go:141] libmachine: (addons-798361)   </cpu>
	I1031 23:44:29.189546   14921 main.go:141] libmachine: (addons-798361)   <os>
	I1031 23:44:29.189552   14921 main.go:141] libmachine: (addons-798361)     <type>hvm</type>
	I1031 23:44:29.189565   14921 main.go:141] libmachine: (addons-798361)     <boot dev='cdrom'/>
	I1031 23:44:29.189574   14921 main.go:141] libmachine: (addons-798361)     <boot dev='hd'/>
	I1031 23:44:29.189579   14921 main.go:141] libmachine: (addons-798361)     <bootmenu enable='no'/>
	I1031 23:44:29.189588   14921 main.go:141] libmachine: (addons-798361)   </os>
	I1031 23:44:29.189595   14921 main.go:141] libmachine: (addons-798361)   <devices>
	I1031 23:44:29.189602   14921 main.go:141] libmachine: (addons-798361)     <disk type='file' device='cdrom'>
	I1031 23:44:29.189612   14921 main.go:141] libmachine: (addons-798361)       <source file='/home/jenkins/minikube-integration/17486-7305/.minikube/machines/addons-798361/boot2docker.iso'/>
	I1031 23:44:29.189625   14921 main.go:141] libmachine: (addons-798361)       <target dev='hdc' bus='scsi'/>
	I1031 23:44:29.189630   14921 main.go:141] libmachine: (addons-798361)       <readonly/>
	I1031 23:44:29.189653   14921 main.go:141] libmachine: (addons-798361)     </disk>
	I1031 23:44:29.189677   14921 main.go:141] libmachine: (addons-798361)     <disk type='file' device='disk'>
	I1031 23:44:29.189689   14921 main.go:141] libmachine: (addons-798361)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1031 23:44:29.189700   14921 main.go:141] libmachine: (addons-798361)       <source file='/home/jenkins/minikube-integration/17486-7305/.minikube/machines/addons-798361/addons-798361.rawdisk'/>
	I1031 23:44:29.189710   14921 main.go:141] libmachine: (addons-798361)       <target dev='hda' bus='virtio'/>
	I1031 23:44:29.189718   14921 main.go:141] libmachine: (addons-798361)     </disk>
	I1031 23:44:29.189725   14921 main.go:141] libmachine: (addons-798361)     <interface type='network'>
	I1031 23:44:29.189733   14921 main.go:141] libmachine: (addons-798361)       <source network='mk-addons-798361'/>
	I1031 23:44:29.189739   14921 main.go:141] libmachine: (addons-798361)       <model type='virtio'/>
	I1031 23:44:29.189747   14921 main.go:141] libmachine: (addons-798361)     </interface>
	I1031 23:44:29.189770   14921 main.go:141] libmachine: (addons-798361)     <interface type='network'>
	I1031 23:44:29.189794   14921 main.go:141] libmachine: (addons-798361)       <source network='default'/>
	I1031 23:44:29.189810   14921 main.go:141] libmachine: (addons-798361)       <model type='virtio'/>
	I1031 23:44:29.189822   14921 main.go:141] libmachine: (addons-798361)     </interface>
	I1031 23:44:29.189837   14921 main.go:141] libmachine: (addons-798361)     <serial type='pty'>
	I1031 23:44:29.189849   14921 main.go:141] libmachine: (addons-798361)       <target port='0'/>
	I1031 23:44:29.189864   14921 main.go:141] libmachine: (addons-798361)     </serial>
	I1031 23:44:29.189884   14921 main.go:141] libmachine: (addons-798361)     <console type='pty'>
	I1031 23:44:29.189899   14921 main.go:141] libmachine: (addons-798361)       <target type='serial' port='0'/>
	I1031 23:44:29.189907   14921 main.go:141] libmachine: (addons-798361)     </console>
	I1031 23:44:29.189918   14921 main.go:141] libmachine: (addons-798361)     <rng model='virtio'>
	I1031 23:44:29.189929   14921 main.go:141] libmachine: (addons-798361)       <backend model='random'>/dev/random</backend>
	I1031 23:44:29.189942   14921 main.go:141] libmachine: (addons-798361)     </rng>
	I1031 23:44:29.189957   14921 main.go:141] libmachine: (addons-798361)     
	I1031 23:44:29.189973   14921 main.go:141] libmachine: (addons-798361)     
	I1031 23:44:29.189985   14921 main.go:141] libmachine: (addons-798361)   </devices>
	I1031 23:44:29.189994   14921 main.go:141] libmachine: (addons-798361) </domain>
	I1031 23:44:29.190005   14921 main.go:141] libmachine: (addons-798361) 
	I1031 23:44:29.196085   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:b4:1c:62 in network default
	I1031 23:44:29.196575   14921 main.go:141] libmachine: (addons-798361) Ensuring networks are active...
	I1031 23:44:29.196596   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:29.197245   14921 main.go:141] libmachine: (addons-798361) Ensuring network default is active
	I1031 23:44:29.197577   14921 main.go:141] libmachine: (addons-798361) Ensuring network mk-addons-798361 is active
	I1031 23:44:29.198026   14921 main.go:141] libmachine: (addons-798361) Getting domain xml...
	I1031 23:44:29.198620   14921 main.go:141] libmachine: (addons-798361) Creating domain...
	I1031 23:44:30.665969   14921 main.go:141] libmachine: (addons-798361) Waiting to get IP...
	I1031 23:44:30.666761   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:30.667388   14921 main.go:141] libmachine: (addons-798361) DBG | unable to find current IP address of domain addons-798361 in network mk-addons-798361
	I1031 23:44:30.667433   14921 main.go:141] libmachine: (addons-798361) DBG | I1031 23:44:30.667355   14944 retry.go:31] will retry after 200.736915ms: waiting for machine to come up
	I1031 23:44:30.869780   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:30.870314   14921 main.go:141] libmachine: (addons-798361) DBG | unable to find current IP address of domain addons-798361 in network mk-addons-798361
	I1031 23:44:30.870346   14921 main.go:141] libmachine: (addons-798361) DBG | I1031 23:44:30.870276   14944 retry.go:31] will retry after 251.612088ms: waiting for machine to come up
	I1031 23:44:31.123674   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:31.124098   14921 main.go:141] libmachine: (addons-798361) DBG | unable to find current IP address of domain addons-798361 in network mk-addons-798361
	I1031 23:44:31.124122   14921 main.go:141] libmachine: (addons-798361) DBG | I1031 23:44:31.124065   14944 retry.go:31] will retry after 405.419784ms: waiting for machine to come up
	I1031 23:44:31.530558   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:31.530863   14921 main.go:141] libmachine: (addons-798361) DBG | unable to find current IP address of domain addons-798361 in network mk-addons-798361
	I1031 23:44:31.530901   14921 main.go:141] libmachine: (addons-798361) DBG | I1031 23:44:31.530827   14944 retry.go:31] will retry after 610.114721ms: waiting for machine to come up
	I1031 23:44:32.142558   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:32.142946   14921 main.go:141] libmachine: (addons-798361) DBG | unable to find current IP address of domain addons-798361 in network mk-addons-798361
	I1031 23:44:32.142969   14921 main.go:141] libmachine: (addons-798361) DBG | I1031 23:44:32.142891   14944 retry.go:31] will retry after 492.170927ms: waiting for machine to come up
	I1031 23:44:32.636571   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:32.636977   14921 main.go:141] libmachine: (addons-798361) DBG | unable to find current IP address of domain addons-798361 in network mk-addons-798361
	I1031 23:44:32.637017   14921 main.go:141] libmachine: (addons-798361) DBG | I1031 23:44:32.636923   14944 retry.go:31] will retry after 846.912814ms: waiting for machine to come up
	I1031 23:44:33.485086   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:33.485394   14921 main.go:141] libmachine: (addons-798361) DBG | unable to find current IP address of domain addons-798361 in network mk-addons-798361
	I1031 23:44:33.485426   14921 main.go:141] libmachine: (addons-798361) DBG | I1031 23:44:33.485364   14944 retry.go:31] will retry after 766.906347ms: waiting for machine to come up
	I1031 23:44:34.254344   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:34.254740   14921 main.go:141] libmachine: (addons-798361) DBG | unable to find current IP address of domain addons-798361 in network mk-addons-798361
	I1031 23:44:34.254766   14921 main.go:141] libmachine: (addons-798361) DBG | I1031 23:44:34.254681   14944 retry.go:31] will retry after 1.122601751s: waiting for machine to come up
	I1031 23:44:35.379070   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:35.379475   14921 main.go:141] libmachine: (addons-798361) DBG | unable to find current IP address of domain addons-798361 in network mk-addons-798361
	I1031 23:44:35.379499   14921 main.go:141] libmachine: (addons-798361) DBG | I1031 23:44:35.379453   14944 retry.go:31] will retry after 1.714347637s: waiting for machine to come up
	I1031 23:44:37.095144   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:37.095542   14921 main.go:141] libmachine: (addons-798361) DBG | unable to find current IP address of domain addons-798361 in network mk-addons-798361
	I1031 23:44:37.095570   14921 main.go:141] libmachine: (addons-798361) DBG | I1031 23:44:37.095492   14944 retry.go:31] will retry after 2.014321469s: waiting for machine to come up
	I1031 23:44:39.111231   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:39.111654   14921 main.go:141] libmachine: (addons-798361) DBG | unable to find current IP address of domain addons-798361 in network mk-addons-798361
	I1031 23:44:39.111685   14921 main.go:141] libmachine: (addons-798361) DBG | I1031 23:44:39.111563   14944 retry.go:31] will retry after 2.36406115s: waiting for machine to come up
	I1031 23:44:41.479179   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:41.479564   14921 main.go:141] libmachine: (addons-798361) DBG | unable to find current IP address of domain addons-798361 in network mk-addons-798361
	I1031 23:44:41.479605   14921 main.go:141] libmachine: (addons-798361) DBG | I1031 23:44:41.479503   14944 retry.go:31] will retry after 3.609999589s: waiting for machine to come up
	I1031 23:44:45.093702   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:45.094424   14921 main.go:141] libmachine: (addons-798361) DBG | unable to find current IP address of domain addons-798361 in network mk-addons-798361
	I1031 23:44:45.094445   14921 main.go:141] libmachine: (addons-798361) DBG | I1031 23:44:45.094363   14944 retry.go:31] will retry after 4.303010731s: waiting for machine to come up
	I1031 23:44:49.398599   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:49.399081   14921 main.go:141] libmachine: (addons-798361) Found IP for machine: 192.168.39.214
	I1031 23:44:49.399119   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has current primary IP address 192.168.39.214 and MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:49.399133   14921 main.go:141] libmachine: (addons-798361) Reserving static IP address...
	I1031 23:44:49.399440   14921 main.go:141] libmachine: (addons-798361) DBG | unable to find host DHCP lease matching {name: "addons-798361", mac: "52:54:00:0f:da:27", ip: "192.168.39.214"} in network mk-addons-798361
	I1031 23:44:49.470844   14921 main.go:141] libmachine: (addons-798361) DBG | Getting to WaitForSSH function...
	I1031 23:44:49.470873   14921 main.go:141] libmachine: (addons-798361) Reserved static IP address: 192.168.39.214
	I1031 23:44:49.470887   14921 main.go:141] libmachine: (addons-798361) Waiting for SSH to be available...
	I1031 23:44:49.473459   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:49.473756   14921 main.go:141] libmachine: (addons-798361) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:0f:da:27", ip: ""} in network mk-addons-798361
	I1031 23:44:49.473793   14921 main.go:141] libmachine: (addons-798361) DBG | unable to find defined IP address of network mk-addons-798361 interface with MAC address 52:54:00:0f:da:27
	I1031 23:44:49.473979   14921 main.go:141] libmachine: (addons-798361) DBG | Using SSH client type: external
	I1031 23:44:49.474001   14921 main.go:141] libmachine: (addons-798361) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/addons-798361/id_rsa (-rw-------)
	I1031 23:44:49.474027   14921 main.go:141] libmachine: (addons-798361) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7305/.minikube/machines/addons-798361/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 23:44:49.474044   14921 main.go:141] libmachine: (addons-798361) DBG | About to run SSH command:
	I1031 23:44:49.474075   14921 main.go:141] libmachine: (addons-798361) DBG | exit 0
	I1031 23:44:49.485727   14921 main.go:141] libmachine: (addons-798361) DBG | SSH cmd err, output: exit status 255: 
	I1031 23:44:49.485756   14921 main.go:141] libmachine: (addons-798361) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1031 23:44:49.485767   14921 main.go:141] libmachine: (addons-798361) DBG | command : exit 0
	I1031 23:44:49.485775   14921 main.go:141] libmachine: (addons-798361) DBG | err     : exit status 255
	I1031 23:44:49.485817   14921 main.go:141] libmachine: (addons-798361) DBG | output  : 
	I1031 23:44:52.487820   14921 main.go:141] libmachine: (addons-798361) DBG | Getting to WaitForSSH function...
	I1031 23:44:52.490023   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:52.490325   14921 main.go:141] libmachine: (addons-798361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:da:27", ip: ""} in network mk-addons-798361: {Iface:virbr1 ExpiryTime:2023-11-01 00:44:44 +0000 UTC Type:0 Mac:52:54:00:0f:da:27 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-798361 Clientid:01:52:54:00:0f:da:27}
	I1031 23:44:52.490378   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined IP address 192.168.39.214 and MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:52.490511   14921 main.go:141] libmachine: (addons-798361) DBG | Using SSH client type: external
	I1031 23:44:52.490542   14921 main.go:141] libmachine: (addons-798361) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/addons-798361/id_rsa (-rw-------)
	I1031 23:44:52.490577   14921 main.go:141] libmachine: (addons-798361) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7305/.minikube/machines/addons-798361/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 23:44:52.490594   14921 main.go:141] libmachine: (addons-798361) DBG | About to run SSH command:
	I1031 23:44:52.490604   14921 main.go:141] libmachine: (addons-798361) DBG | exit 0
	I1031 23:44:52.575835   14921 main.go:141] libmachine: (addons-798361) DBG | SSH cmd err, output: <nil>: 
	I1031 23:44:52.576080   14921 main.go:141] libmachine: (addons-798361) KVM machine creation complete!
	I1031 23:44:52.576430   14921 main.go:141] libmachine: (addons-798361) Calling .GetConfigRaw
	I1031 23:44:52.576959   14921 main.go:141] libmachine: (addons-798361) Calling .DriverName
	I1031 23:44:52.577164   14921 main.go:141] libmachine: (addons-798361) Calling .DriverName
	I1031 23:44:52.577329   14921 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1031 23:44:52.577343   14921 main.go:141] libmachine: (addons-798361) Calling .GetState
	I1031 23:44:52.578672   14921 main.go:141] libmachine: Detecting operating system of created instance...
	I1031 23:44:52.578684   14921 main.go:141] libmachine: Waiting for SSH to be available...
	I1031 23:44:52.578691   14921 main.go:141] libmachine: Getting to WaitForSSH function...
	I1031 23:44:52.578699   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHHostname
	I1031 23:44:52.580705   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:52.581010   14921 main.go:141] libmachine: (addons-798361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:da:27", ip: ""} in network mk-addons-798361: {Iface:virbr1 ExpiryTime:2023-11-01 00:44:44 +0000 UTC Type:0 Mac:52:54:00:0f:da:27 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-798361 Clientid:01:52:54:00:0f:da:27}
	I1031 23:44:52.581043   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined IP address 192.168.39.214 and MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:52.581126   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHPort
	I1031 23:44:52.581293   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHKeyPath
	I1031 23:44:52.581442   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHKeyPath
	I1031 23:44:52.581544   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHUsername
	I1031 23:44:52.581734   14921 main.go:141] libmachine: Using SSH client type: native
	I1031 23:44:52.582044   14921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1031 23:44:52.582056   14921 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1031 23:44:52.691044   14921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 23:44:52.691075   14921 main.go:141] libmachine: Detecting the provisioner...
	I1031 23:44:52.691085   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHHostname
	I1031 23:44:52.693727   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:52.694069   14921 main.go:141] libmachine: (addons-798361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:da:27", ip: ""} in network mk-addons-798361: {Iface:virbr1 ExpiryTime:2023-11-01 00:44:44 +0000 UTC Type:0 Mac:52:54:00:0f:da:27 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-798361 Clientid:01:52:54:00:0f:da:27}
	I1031 23:44:52.694094   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined IP address 192.168.39.214 and MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:52.694259   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHPort
	I1031 23:44:52.694418   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHKeyPath
	I1031 23:44:52.694563   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHKeyPath
	I1031 23:44:52.694728   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHUsername
	I1031 23:44:52.694897   14921 main.go:141] libmachine: Using SSH client type: native
	I1031 23:44:52.695210   14921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1031 23:44:52.695222   14921 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1031 23:44:52.808568   14921 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g0cee705-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1031 23:44:52.808673   14921 main.go:141] libmachine: found compatible host: buildroot
	I1031 23:44:52.808689   14921 main.go:141] libmachine: Provisioning with buildroot...
	I1031 23:44:52.808705   14921 main.go:141] libmachine: (addons-798361) Calling .GetMachineName
	I1031 23:44:52.808955   14921 buildroot.go:166] provisioning hostname "addons-798361"
	I1031 23:44:52.808986   14921 main.go:141] libmachine: (addons-798361) Calling .GetMachineName
	I1031 23:44:52.809180   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHHostname
	I1031 23:44:52.811694   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:52.812025   14921 main.go:141] libmachine: (addons-798361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:da:27", ip: ""} in network mk-addons-798361: {Iface:virbr1 ExpiryTime:2023-11-01 00:44:44 +0000 UTC Type:0 Mac:52:54:00:0f:da:27 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-798361 Clientid:01:52:54:00:0f:da:27}
	I1031 23:44:52.812048   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined IP address 192.168.39.214 and MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:52.812249   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHPort
	I1031 23:44:52.812401   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHKeyPath
	I1031 23:44:52.812596   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHKeyPath
	I1031 23:44:52.812735   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHUsername
	I1031 23:44:52.812862   14921 main.go:141] libmachine: Using SSH client type: native
	I1031 23:44:52.813179   14921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1031 23:44:52.813195   14921 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-798361 && echo "addons-798361" | sudo tee /etc/hostname
	I1031 23:44:52.935488   14921 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-798361
	
	I1031 23:44:52.935525   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHHostname
	I1031 23:44:52.938254   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:52.938612   14921 main.go:141] libmachine: (addons-798361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:da:27", ip: ""} in network mk-addons-798361: {Iface:virbr1 ExpiryTime:2023-11-01 00:44:44 +0000 UTC Type:0 Mac:52:54:00:0f:da:27 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-798361 Clientid:01:52:54:00:0f:da:27}
	I1031 23:44:52.938643   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined IP address 192.168.39.214 and MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:52.938806   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHPort
	I1031 23:44:52.939003   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHKeyPath
	I1031 23:44:52.939155   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHKeyPath
	I1031 23:44:52.939285   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHUsername
	I1031 23:44:52.939452   14921 main.go:141] libmachine: Using SSH client type: native
	I1031 23:44:52.939756   14921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1031 23:44:52.939772   14921 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-798361' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-798361/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-798361' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 23:44:53.059713   14921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 23:44:53.059742   14921 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1031 23:44:53.059765   14921 buildroot.go:174] setting up certificates
	I1031 23:44:53.059779   14921 provision.go:83] configureAuth start
	I1031 23:44:53.059791   14921 main.go:141] libmachine: (addons-798361) Calling .GetMachineName
	I1031 23:44:53.060073   14921 main.go:141] libmachine: (addons-798361) Calling .GetIP
	I1031 23:44:53.062938   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:53.063262   14921 main.go:141] libmachine: (addons-798361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:da:27", ip: ""} in network mk-addons-798361: {Iface:virbr1 ExpiryTime:2023-11-01 00:44:44 +0000 UTC Type:0 Mac:52:54:00:0f:da:27 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-798361 Clientid:01:52:54:00:0f:da:27}
	I1031 23:44:53.063300   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined IP address 192.168.39.214 and MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:53.063444   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHHostname
	I1031 23:44:53.065366   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:53.065666   14921 main.go:141] libmachine: (addons-798361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:da:27", ip: ""} in network mk-addons-798361: {Iface:virbr1 ExpiryTime:2023-11-01 00:44:44 +0000 UTC Type:0 Mac:52:54:00:0f:da:27 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-798361 Clientid:01:52:54:00:0f:da:27}
	I1031 23:44:53.065705   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined IP address 192.168.39.214 and MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:53.065853   14921 provision.go:138] copyHostCerts
	I1031 23:44:53.065915   14921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1031 23:44:53.066035   14921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1031 23:44:53.066131   14921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1031 23:44:53.066175   14921 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.addons-798361 san=[192.168.39.214 192.168.39.214 localhost 127.0.0.1 minikube addons-798361]
	I1031 23:44:53.288647   14921 provision.go:172] copyRemoteCerts
	I1031 23:44:53.288715   14921 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 23:44:53.288737   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHHostname
	I1031 23:44:53.291645   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:53.292011   14921 main.go:141] libmachine: (addons-798361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:da:27", ip: ""} in network mk-addons-798361: {Iface:virbr1 ExpiryTime:2023-11-01 00:44:44 +0000 UTC Type:0 Mac:52:54:00:0f:da:27 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-798361 Clientid:01:52:54:00:0f:da:27}
	I1031 23:44:53.292053   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined IP address 192.168.39.214 and MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:53.292225   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHPort
	I1031 23:44:53.292558   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHKeyPath
	I1031 23:44:53.292755   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHUsername
	I1031 23:44:53.292955   14921 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/addons-798361/id_rsa Username:docker}
	I1031 23:44:53.378871   14921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1031 23:44:53.402577   14921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1031 23:44:53.427165   14921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1031 23:44:53.452194   14921 provision.go:86] duration metric: configureAuth took 392.400992ms
	I1031 23:44:53.452223   14921 buildroot.go:189] setting minikube options for container-runtime
	I1031 23:44:53.452393   14921 config.go:182] Loaded profile config "addons-798361": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 23:44:53.452467   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHHostname
	I1031 23:44:53.455519   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:53.455977   14921 main.go:141] libmachine: (addons-798361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:da:27", ip: ""} in network mk-addons-798361: {Iface:virbr1 ExpiryTime:2023-11-01 00:44:44 +0000 UTC Type:0 Mac:52:54:00:0f:da:27 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-798361 Clientid:01:52:54:00:0f:da:27}
	I1031 23:44:53.456008   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined IP address 192.168.39.214 and MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:53.456172   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHPort
	I1031 23:44:53.456377   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHKeyPath
	I1031 23:44:53.456539   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHKeyPath
	I1031 23:44:53.456692   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHUsername
	I1031 23:44:53.456836   14921 main.go:141] libmachine: Using SSH client type: native
	I1031 23:44:53.457139   14921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1031 23:44:53.457161   14921 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1031 23:44:53.766525   14921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
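
The drop-in above is presumably read by the crio.service unit as an environment file, which would explain why the unit is restarted immediately after writing it. A quick manual check that it landed (not part of the minikube flow itself):

    cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
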
	I1031 23:44:53.766558   14921 main.go:141] libmachine: Checking connection to Docker...
	I1031 23:44:53.766566   14921 main.go:141] libmachine: (addons-798361) Calling .GetURL
	I1031 23:44:53.767775   14921 main.go:141] libmachine: (addons-798361) DBG | Using libvirt version 6000000
	I1031 23:44:53.770385   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:53.770897   14921 main.go:141] libmachine: (addons-798361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:da:27", ip: ""} in network mk-addons-798361: {Iface:virbr1 ExpiryTime:2023-11-01 00:44:44 +0000 UTC Type:0 Mac:52:54:00:0f:da:27 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-798361 Clientid:01:52:54:00:0f:da:27}
	I1031 23:44:53.770935   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined IP address 192.168.39.214 and MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:53.771075   14921 main.go:141] libmachine: Docker is up and running!
	I1031 23:44:53.771091   14921 main.go:141] libmachine: Reticulating splines...
	I1031 23:44:53.771098   14921 client.go:171] LocalClient.Create took 25.266392502s
	I1031 23:44:53.771119   14921 start.go:167] duration metric: libmachine.API.Create for "addons-798361" took 25.266451557s
	I1031 23:44:53.771130   14921 start.go:300] post-start starting for "addons-798361" (driver="kvm2")
	I1031 23:44:53.771140   14921 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 23:44:53.771169   14921 main.go:141] libmachine: (addons-798361) Calling .DriverName
	I1031 23:44:53.771450   14921 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 23:44:53.771480   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHHostname
	I1031 23:44:53.773903   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:53.774356   14921 main.go:141] libmachine: (addons-798361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:da:27", ip: ""} in network mk-addons-798361: {Iface:virbr1 ExpiryTime:2023-11-01 00:44:44 +0000 UTC Type:0 Mac:52:54:00:0f:da:27 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-798361 Clientid:01:52:54:00:0f:da:27}
	I1031 23:44:53.774396   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined IP address 192.168.39.214 and MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:53.774556   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHPort
	I1031 23:44:53.774760   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHKeyPath
	I1031 23:44:53.774938   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHUsername
	I1031 23:44:53.775095   14921 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/addons-798361/id_rsa Username:docker}
	I1031 23:44:53.861980   14921 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 23:44:53.866432   14921 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 23:44:53.866455   14921 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1031 23:44:53.866528   14921 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1031 23:44:53.866549   14921 start.go:303] post-start completed in 95.414143ms
	I1031 23:44:53.866579   14921 main.go:141] libmachine: (addons-798361) Calling .GetConfigRaw
	I1031 23:44:53.867171   14921 main.go:141] libmachine: (addons-798361) Calling .GetIP
	I1031 23:44:53.869927   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:53.870341   14921 main.go:141] libmachine: (addons-798361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:da:27", ip: ""} in network mk-addons-798361: {Iface:virbr1 ExpiryTime:2023-11-01 00:44:44 +0000 UTC Type:0 Mac:52:54:00:0f:da:27 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-798361 Clientid:01:52:54:00:0f:da:27}
	I1031 23:44:53.870363   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined IP address 192.168.39.214 and MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:53.870674   14921 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/config.json ...
	I1031 23:44:53.870868   14921 start.go:128] duration metric: createHost completed in 25.385189754s
	I1031 23:44:53.870890   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHHostname
	I1031 23:44:53.873810   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:53.874173   14921 main.go:141] libmachine: (addons-798361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:da:27", ip: ""} in network mk-addons-798361: {Iface:virbr1 ExpiryTime:2023-11-01 00:44:44 +0000 UTC Type:0 Mac:52:54:00:0f:da:27 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-798361 Clientid:01:52:54:00:0f:da:27}
	I1031 23:44:53.874220   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined IP address 192.168.39.214 and MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:53.874334   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHPort
	I1031 23:44:53.874550   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHKeyPath
	I1031 23:44:53.874708   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHKeyPath
	I1031 23:44:53.874828   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHUsername
	I1031 23:44:53.874980   14921 main.go:141] libmachine: Using SSH client type: native
	I1031 23:44:53.875295   14921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1031 23:44:53.875309   14921 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1031 23:44:53.988541   14921 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698795893.960627423
	
	I1031 23:44:53.988566   14921 fix.go:206] guest clock: 1698795893.960627423
	I1031 23:44:53.988576   14921 fix.go:219] Guest: 2023-10-31 23:44:53.960627423 +0000 UTC Remote: 2023-10-31 23:44:53.870879134 +0000 UTC m=+26.477444300 (delta=89.748289ms)
	I1031 23:44:53.988629   14921 fix.go:190] guest clock delta is within tolerance: 89.748289ms
	I1031 23:44:53.988641   14921 start.go:83] releasing machines lock for "addons-798361", held for 25.503062718s
	I1031 23:44:53.988668   14921 main.go:141] libmachine: (addons-798361) Calling .DriverName
	I1031 23:44:53.988927   14921 main.go:141] libmachine: (addons-798361) Calling .GetIP
	I1031 23:44:53.991669   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:53.991976   14921 main.go:141] libmachine: (addons-798361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:da:27", ip: ""} in network mk-addons-798361: {Iface:virbr1 ExpiryTime:2023-11-01 00:44:44 +0000 UTC Type:0 Mac:52:54:00:0f:da:27 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-798361 Clientid:01:52:54:00:0f:da:27}
	I1031 23:44:53.992022   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined IP address 192.168.39.214 and MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:53.992188   14921 main.go:141] libmachine: (addons-798361) Calling .DriverName
	I1031 23:44:53.992667   14921 main.go:141] libmachine: (addons-798361) Calling .DriverName
	I1031 23:44:53.992842   14921 main.go:141] libmachine: (addons-798361) Calling .DriverName
	I1031 23:44:53.992966   14921 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 23:44:53.993006   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHHostname
	I1031 23:44:53.993070   14921 ssh_runner.go:195] Run: cat /version.json
	I1031 23:44:53.993101   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHHostname
	I1031 23:44:53.995874   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:53.995971   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:53.996244   14921 main.go:141] libmachine: (addons-798361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:da:27", ip: ""} in network mk-addons-798361: {Iface:virbr1 ExpiryTime:2023-11-01 00:44:44 +0000 UTC Type:0 Mac:52:54:00:0f:da:27 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-798361 Clientid:01:52:54:00:0f:da:27}
	I1031 23:44:53.996274   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined IP address 192.168.39.214 and MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:53.996304   14921 main.go:141] libmachine: (addons-798361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:da:27", ip: ""} in network mk-addons-798361: {Iface:virbr1 ExpiryTime:2023-11-01 00:44:44 +0000 UTC Type:0 Mac:52:54:00:0f:da:27 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-798361 Clientid:01:52:54:00:0f:da:27}
	I1031 23:44:53.996325   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined IP address 192.168.39.214 and MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:53.996486   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHPort
	I1031 23:44:53.996577   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHPort
	I1031 23:44:53.996687   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHKeyPath
	I1031 23:44:53.996749   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHKeyPath
	I1031 23:44:53.996820   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHUsername
	I1031 23:44:53.996898   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHUsername
	I1031 23:44:53.996981   14921 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/addons-798361/id_rsa Username:docker}
	I1031 23:44:53.997029   14921 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/addons-798361/id_rsa Username:docker}
	I1031 23:44:54.076512   14921 ssh_runner.go:195] Run: systemctl --version
	I1031 23:44:54.116253   14921 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1031 23:44:54.275315   14921 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1031 23:44:54.281465   14921 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 23:44:54.281522   14921 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 23:44:54.295877   14921 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
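
Written out with explicit shell quoting (the logged form loses the quotes), the CNI cleanup above, which renames any conflicting bridge/podman configs to *.mk_disabled as confirmed by the cni.go:262 line, is roughly the following sketch; the quoting is an assumption, only the predicates, the -printf format and the rename come from the log:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;   # GNU find substitutes {} inside the sh -c string
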
	I1031 23:44:54.295903   14921 start.go:472] detecting cgroup driver to use...
	I1031 23:44:54.296005   14921 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 23:44:54.309697   14921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 23:44:54.321605   14921 docker.go:204] disabling cri-docker service (if available) ...
	I1031 23:44:54.321671   14921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1031 23:44:54.333753   14921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1031 23:44:54.346056   14921 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1031 23:44:54.446509   14921 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1031 23:44:54.563029   14921 docker.go:220] disabling docker service ...
	I1031 23:44:54.563086   14921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1031 23:44:54.576634   14921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1031 23:44:54.588410   14921 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1031 23:44:54.699678   14921 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1031 23:44:54.801283   14921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1031 23:44:54.813737   14921 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 23:44:54.830420   14921 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1031 23:44:54.830473   14921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 23:44:54.839134   14921 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1031 23:44:54.839197   14921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 23:44:54.848004   14921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 23:44:54.857636   14921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
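
After the sed edits above, the 02-crio.conf drop-in should end up with values equivalent to the following; this end state is inferred from the substitutions, and a grep like this would confirm it on the guest:

    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.9"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
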
	I1031 23:44:54.866899   14921 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 23:44:54.875605   14921 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 23:44:54.883157   14921 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1031 23:44:54.883215   14921 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1031 23:44:54.895202   14921 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 23:44:54.903374   14921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 23:44:55.003451   14921 ssh_runner.go:195] Run: sudo systemctl restart crio
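
The netfilter handling here is a fallback chain: the sysctl probe fails because br_netfilter is not loaded yet, so the module is loaded, IPv4 forwarding is enabled, and CRI-O is restarted so the edited drop-in takes effect. Condensed (the same commands as the log, minus the probe-and-retry bookkeeping):

    sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo systemctl daemon-reload && sudo systemctl restart crio
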
	I1031 23:44:55.169808   14921 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1031 23:44:55.169898   14921 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1031 23:44:55.174356   14921 start.go:540] Will wait 60s for crictl version
	I1031 23:44:55.174442   14921 ssh_runner.go:195] Run: which crictl
	I1031 23:44:55.177768   14921 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 23:44:55.213413   14921 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1031 23:44:55.213547   14921 ssh_runner.go:195] Run: crio --version
	I1031 23:44:55.260851   14921 ssh_runner.go:195] Run: crio --version
	I1031 23:44:55.309419   14921 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1031 23:44:55.310947   14921 main.go:141] libmachine: (addons-798361) Calling .GetIP
	I1031 23:44:55.313407   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:55.313687   14921 main.go:141] libmachine: (addons-798361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:da:27", ip: ""} in network mk-addons-798361: {Iface:virbr1 ExpiryTime:2023-11-01 00:44:44 +0000 UTC Type:0 Mac:52:54:00:0f:da:27 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-798361 Clientid:01:52:54:00:0f:da:27}
	I1031 23:44:55.313719   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined IP address 192.168.39.214 and MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:44:55.313892   14921 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1031 23:44:55.317831   14921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 23:44:55.330423   14921 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1031 23:44:55.330501   14921 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 23:44:55.364068   14921 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1031 23:44:55.364151   14921 ssh_runner.go:195] Run: which lz4
	I1031 23:44:55.367727   14921 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1031 23:44:55.371479   14921 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1031 23:44:55.371510   14921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1031 23:44:56.980221   14921 crio.go:444] Took 1.612518 seconds to copy over tarball
	I1031 23:44:56.980291   14921 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1031 23:44:59.977732   14921 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.997417423s)
	I1031 23:44:59.977757   14921 crio.go:451] Took 2.997510 seconds to extract the tarball
	I1031 23:44:59.977768   14921 ssh_runner.go:146] rm: /preloaded.tar.lz4
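
The preload handling reduces to: check whether the tarball already exists on the guest, copy it over SSH when it does not (the ~457 MB transfer above), unpack it under /var so CRI-O's image store is pre-populated, then remove the tarball. On the guest side that is roughly:

    stat -c "%s %y" /preloaded.tar.lz4              # missing on first boot, which triggers the copy
    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4  # ~3s to extract here
    rm /preloaded.tar.lz4                           # the log only shows "rm: /preloaded.tar.lz4"; exact flags are an assumption
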
	I1031 23:45:00.020429   14921 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 23:45:00.094931   14921 crio.go:496] all images are preloaded for cri-o runtime.
	I1031 23:45:00.094954   14921 cache_images.go:84] Images are preloaded, skipping loading
	I1031 23:45:00.095014   14921 ssh_runner.go:195] Run: crio config
	I1031 23:45:00.153273   14921 cni.go:84] Creating CNI manager for ""
	I1031 23:45:00.153300   14921 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 23:45:00.153326   14921 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 23:45:00.153351   14921 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.214 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-798361 NodeName:addons-798361 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1031 23:45:00.153583   14921 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-798361"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
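
This config is written to /var/tmp/minikube/kubeadm.yaml a few lines further down and fed to kubeadm init. If one wanted to sanity-check it by hand first, kubeadm 1.26+ ships a validator; a hypothetical manual step, not part of the test flow:

    sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" \
      kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
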
	I1031 23:45:00.153681   14921 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-798361 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:addons-798361 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1031 23:45:00.153753   14921 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1031 23:45:00.162302   14921 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 23:45:00.162364   14921 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 23:45:00.170375   14921 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1031 23:45:00.186226   14921 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 23:45:00.201889   14921 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I1031 23:45:00.218559   14921 ssh_runner.go:195] Run: grep 192.168.39.214	control-plane.minikube.internal$ /etc/hosts
	I1031 23:45:00.222284   14921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 23:45:00.234557   14921 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361 for IP: 192.168.39.214
	I1031 23:45:00.234586   14921 certs.go:190] acquiring lock for shared ca certs: {Name:mk6185b3938c4b51e80f57b7c81926c81b632b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 23:45:00.234740   14921 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key
	I1031 23:45:00.454636   14921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt ...
	I1031 23:45:00.454665   14921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt: {Name:mk618bd4e03880536245ee07a27f7cea72082bf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 23:45:00.454823   14921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key ...
	I1031 23:45:00.454835   14921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key: {Name:mk50fddb8c314387129309b3d744c5f330f92107 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 23:45:00.454901   14921 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key
	I1031 23:45:00.590074   14921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt ...
	I1031 23:45:00.590113   14921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt: {Name:mk5888f3295522fdc1f8223ef618e42504335ed8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 23:45:00.590321   14921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key ...
	I1031 23:45:00.590336   14921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key: {Name:mkad42fa7dd084eaf16cdf08e0fb3157ba9801a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 23:45:00.590478   14921 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.key
	I1031 23:45:00.590496   14921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt with IP's: []
	I1031 23:45:00.666283   14921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt ...
	I1031 23:45:00.666317   14921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: {Name:mk234b074a36311df0674084c9aaa447460dd97c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 23:45:00.666519   14921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.key ...
	I1031 23:45:00.666535   14921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.key: {Name:mk09dbfc44a9d921c8122eba67e7f91945f69b88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 23:45:00.666637   14921 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/apiserver.key.addc3f73
	I1031 23:45:00.666659   14921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/apiserver.crt.addc3f73 with IP's: [192.168.39.214 10.96.0.1 127.0.0.1 10.0.0.1]
	I1031 23:45:00.858374   14921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/apiserver.crt.addc3f73 ...
	I1031 23:45:00.858407   14921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/apiserver.crt.addc3f73: {Name:mkf881eec97a95228db46fff1dd53d00cc3baf3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 23:45:00.858626   14921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/apiserver.key.addc3f73 ...
	I1031 23:45:00.858644   14921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/apiserver.key.addc3f73: {Name:mka6349f00947eff6a89101672ab28d54c4c60b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 23:45:00.858744   14921 certs.go:337] copying /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/apiserver.crt.addc3f73 -> /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/apiserver.crt
	I1031 23:45:00.858848   14921 certs.go:341] copying /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/apiserver.key.addc3f73 -> /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/apiserver.key
	I1031 23:45:00.858901   14921 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/proxy-client.key
	I1031 23:45:00.858920   14921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/proxy-client.crt with IP's: []
	I1031 23:45:00.994304   14921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/proxy-client.crt ...
	I1031 23:45:00.994335   14921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/proxy-client.crt: {Name:mk8e9f45b448aa586dc1324c7f9558a10cb971c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 23:45:00.994512   14921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/proxy-client.key ...
	I1031 23:45:00.994525   14921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/proxy-client.key: {Name:mka89b674b3361dbcb3b553777d01f1781ddb6bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 23:45:00.994711   14921 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem (1675 bytes)
	I1031 23:45:00.994751   14921 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem (1078 bytes)
	I1031 23:45:00.994778   14921 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem (1123 bytes)
	I1031 23:45:00.994803   14921 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem (1675 bytes)
	I1031 23:45:00.995363   14921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 23:45:01.022451   14921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1031 23:45:01.050540   14921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 23:45:01.074885   14921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1031 23:45:01.098484   14921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 23:45:01.123093   14921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 23:45:01.148286   14921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 23:45:01.173537   14921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1031 23:45:01.197446   14921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 23:45:01.222337   14921 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1031 23:45:01.239070   14921 ssh_runner.go:195] Run: openssl version
	I1031 23:45:01.244353   14921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 23:45:01.254033   14921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 23:45:01.259201   14921 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1031 23:45:01.259264   14921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 23:45:01.265136   14921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 23:45:01.275295   14921 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 23:45:01.279558   14921 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1031 23:45:01.279605   14921 kubeadm.go:404] StartCluster: {Name:addons-798361 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-798361 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 23:45:01.279671   14921 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1031 23:45:01.279741   14921 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 23:45:01.318439   14921 cri.go:89] found id: ""
	I1031 23:45:01.318513   14921 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 23:45:01.327145   14921 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 23:45:01.335278   14921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 23:45:01.343791   14921 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 23:45:01.343837   14921 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1031 23:45:01.529828   14921 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 23:45:12.991667   14921 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1031 23:45:12.991717   14921 kubeadm.go:322] [preflight] Running pre-flight checks
	I1031 23:45:12.991824   14921 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 23:45:12.991919   14921 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 23:45:12.992053   14921 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 23:45:12.992144   14921 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 23:45:12.994192   14921 out.go:204]   - Generating certificates and keys ...
	I1031 23:45:12.994306   14921 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1031 23:45:12.994396   14921 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1031 23:45:12.994491   14921 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1031 23:45:12.994594   14921 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1031 23:45:12.994702   14921 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1031 23:45:12.994773   14921 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1031 23:45:12.994846   14921 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1031 23:45:12.995000   14921 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-798361 localhost] and IPs [192.168.39.214 127.0.0.1 ::1]
	I1031 23:45:12.995076   14921 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1031 23:45:12.995243   14921 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-798361 localhost] and IPs [192.168.39.214 127.0.0.1 ::1]
	I1031 23:45:12.995348   14921 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1031 23:45:12.995431   14921 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1031 23:45:12.995519   14921 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1031 23:45:12.995597   14921 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 23:45:12.995664   14921 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 23:45:12.995737   14921 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 23:45:12.995824   14921 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 23:45:12.995912   14921 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 23:45:12.996041   14921 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 23:45:12.996123   14921 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 23:45:12.999436   14921 out.go:204]   - Booting up control plane ...
	I1031 23:45:12.999557   14921 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 23:45:12.999656   14921 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 23:45:12.999742   14921 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 23:45:12.999886   14921 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 23:45:13.000015   14921 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 23:45:13.000085   14921 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1031 23:45:13.000254   14921 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 23:45:13.000359   14921 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003434 seconds
	I1031 23:45:13.000500   14921 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1031 23:45:13.000668   14921 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1031 23:45:13.000749   14921 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1031 23:45:13.000938   14921 kubeadm.go:322] [mark-control-plane] Marking the node addons-798361 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1031 23:45:13.001013   14921 kubeadm.go:322] [bootstrap-token] Using token: eh86in.j27in3y9bx880j8q
	I1031 23:45:13.003267   14921 out.go:204]   - Configuring RBAC rules ...
	I1031 23:45:13.003423   14921 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1031 23:45:13.003582   14921 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1031 23:45:13.003780   14921 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1031 23:45:13.004005   14921 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1031 23:45:13.004175   14921 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1031 23:45:13.004334   14921 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1031 23:45:13.004475   14921 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1031 23:45:13.004513   14921 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1031 23:45:13.004557   14921 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1031 23:45:13.004563   14921 kubeadm.go:322] 
	I1031 23:45:13.004622   14921 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1031 23:45:13.004635   14921 kubeadm.go:322] 
	I1031 23:45:13.004724   14921 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1031 23:45:13.004741   14921 kubeadm.go:322] 
	I1031 23:45:13.004781   14921 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1031 23:45:13.004832   14921 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1031 23:45:13.004875   14921 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1031 23:45:13.004881   14921 kubeadm.go:322] 
	I1031 23:45:13.004977   14921 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1031 23:45:13.004994   14921 kubeadm.go:322] 
	I1031 23:45:13.005089   14921 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1031 23:45:13.005102   14921 kubeadm.go:322] 
	I1031 23:45:13.005178   14921 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1031 23:45:13.005268   14921 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1031 23:45:13.005375   14921 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1031 23:45:13.005385   14921 kubeadm.go:322] 
	I1031 23:45:13.005521   14921 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1031 23:45:13.005618   14921 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1031 23:45:13.005629   14921 kubeadm.go:322] 
	I1031 23:45:13.005747   14921 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token eh86in.j27in3y9bx880j8q \
	I1031 23:45:13.005877   14921 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 \
	I1031 23:45:13.005909   14921 kubeadm.go:322] 	--control-plane 
	I1031 23:45:13.005922   14921 kubeadm.go:322] 
	I1031 23:45:13.006026   14921 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1031 23:45:13.006040   14921 kubeadm.go:322] 
	I1031 23:45:13.006122   14921 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token eh86in.j27in3y9bx880j8q \
	I1031 23:45:13.006275   14921 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 
	I1031 23:45:13.006287   14921 cni.go:84] Creating CNI manager for ""
	I1031 23:45:13.006294   14921 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 23:45:13.009273   14921 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 23:45:13.011117   14921 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 23:45:13.043091   14921 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
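	[Editor's note, not part of the log: the 1-k8s.conflist copied above is minikube's bridge CNI configuration for the "kvm2 + crio" combination detected a few lines earlier. A typical bridge conflist looks roughly like the sketch below; the field values and pod subnet are assumptions for illustration only, not the literal 457-byte file the run wrote.]
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "addIf": "true",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}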
	I1031 23:45:13.075335   14921 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 23:45:13.075427   14921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:45:13.075480   14921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9 minikube.k8s.io/name=addons-798361 minikube.k8s.io/updated_at=2023_10_31T23_45_13_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:45:13.137175   14921 ops.go:34] apiserver oom_adj: -16
	I1031 23:45:13.250557   14921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:45:13.384717   14921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:45:13.972788   14921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:45:14.472145   14921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:45:14.972247   14921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:45:15.471835   14921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:45:15.971924   14921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:45:16.472043   14921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:45:16.972582   14921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:45:17.472129   14921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:45:17.972382   14921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:45:18.472245   14921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:45:18.972126   14921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:45:19.472758   14921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:45:19.971824   14921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:45:20.472115   14921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:45:20.972418   14921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:45:21.471910   14921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:45:21.972666   14921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:45:22.471848   14921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:45:22.971828   14921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:45:23.472838   14921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:45:23.972143   14921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:45:24.472181   14921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:45:24.972813   14921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:45:25.086627   14921 kubeadm.go:1081] duration metric: took 12.011258632s to wait for elevateKubeSystemPrivileges.
	I1031 23:45:25.086665   14921 kubeadm.go:406] StartCluster complete in 23.807061531s
	I1031 23:45:25.086687   14921 settings.go:142] acquiring lock: {Name:mk7f269e64dfd8d176737f993e01f6e6badafbc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 23:45:25.086831   14921 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1031 23:45:25.087317   14921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/kubeconfig: {Name:mk08da65b6c71084e1cfafb19800038e8c8303e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 23:45:25.087550   14921 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 23:45:25.087632   14921 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
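	[Editor's note, not part of the log: the toEnable map above lists the addons the test turns on once StartCluster completes. Outside the test harness, roughly the same addon set could be requested with the minikube CLI; the command below is an illustrative sketch whose flags and addon names are taken from this log, not the exact invocation the harness used.]
	minikube start -p addons-798361 --driver=kvm2 --container-runtime=crio \
	  --addons=ingress,ingress-dns,registry,metrics-server,helm-tiller,cloud-spanner \
	  --addons=csi-hostpath-driver,volumesnapshots,inspektor-gadget,nvidia-device-plugin \
	  --addons=storage-provisioner-rancher,gcp-auth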
	I1031 23:45:25.087733   14921 addons.go:69] Setting volumesnapshots=true in profile "addons-798361"
	I1031 23:45:25.087745   14921 addons.go:69] Setting ingress-dns=true in profile "addons-798361"
	I1031 23:45:25.087760   14921 addons.go:231] Setting addon volumesnapshots=true in "addons-798361"
	I1031 23:45:25.087762   14921 addons.go:231] Setting addon ingress-dns=true in "addons-798361"
	I1031 23:45:25.087767   14921 config.go:182] Loaded profile config "addons-798361": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 23:45:25.087791   14921 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-798361"
	I1031 23:45:25.087812   14921 addons.go:69] Setting inspektor-gadget=true in profile "addons-798361"
	I1031 23:45:25.087820   14921 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-798361"
	I1031 23:45:25.087823   14921 host.go:66] Checking if "addons-798361" exists ...
	I1031 23:45:25.087829   14921 addons.go:69] Setting cloud-spanner=true in profile "addons-798361"
	I1031 23:45:25.087836   14921 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-798361"
	I1031 23:45:25.087840   14921 addons.go:231] Setting addon cloud-spanner=true in "addons-798361"
	I1031 23:45:25.087864   14921 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-798361"
	I1031 23:45:25.087873   14921 host.go:66] Checking if "addons-798361" exists ...
	I1031 23:45:25.087882   14921 host.go:66] Checking if "addons-798361" exists ...
	I1031 23:45:25.087907   14921 host.go:66] Checking if "addons-798361" exists ...
	I1031 23:45:25.088167   14921 addons.go:69] Setting metrics-server=true in profile "addons-798361"
	I1031 23:45:25.088200   14921 addons.go:231] Setting addon metrics-server=true in "addons-798361"
	I1031 23:45:25.087830   14921 addons.go:69] Setting gcp-auth=true in profile "addons-798361"
	I1031 23:45:25.088248   14921 addons.go:69] Setting default-storageclass=true in profile "addons-798361"
	I1031 23:45:25.088252   14921 host.go:66] Checking if "addons-798361" exists ...
	I1031 23:45:25.088260   14921 mustload.go:65] Loading cluster: addons-798361
	I1031 23:45:25.088261   14921 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-798361"
	I1031 23:45:25.088279   14921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:45:25.088305   14921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:45:25.088307   14921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:45:25.088330   14921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:45:25.088437   14921 config.go:182] Loaded profile config "addons-798361": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 23:45:25.088498   14921 addons.go:69] Setting helm-tiller=true in profile "addons-798361"
	I1031 23:45:25.088519   14921 addons.go:231] Setting addon helm-tiller=true in "addons-798361"
	I1031 23:45:25.088533   14921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:45:25.088572   14921 host.go:66] Checking if "addons-798361" exists ...
	I1031 23:45:25.088588   14921 addons.go:69] Setting storage-provisioner=true in profile "addons-798361"
	I1031 23:45:25.088598   14921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:45:25.088605   14921 addons.go:231] Setting addon storage-provisioner=true in "addons-798361"
	I1031 23:45:25.088639   14921 addons.go:69] Setting registry=true in profile "addons-798361"
	I1031 23:45:25.088649   14921 addons.go:231] Setting addon registry=true in "addons-798361"
	I1031 23:45:25.088655   14921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:45:25.088676   14921 host.go:66] Checking if "addons-798361" exists ...
	I1031 23:45:25.088681   14921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:45:25.088721   14921 addons.go:69] Setting ingress=true in profile "addons-798361"
	I1031 23:45:25.088732   14921 addons.go:231] Setting addon ingress=true in "addons-798361"
	I1031 23:45:25.088739   14921 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-798361"
	I1031 23:45:25.088749   14921 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-798361"
	I1031 23:45:25.088766   14921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:45:25.088768   14921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:45:25.088782   14921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:45:25.088793   14921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:45:25.088806   14921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:45:25.088781   14921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:45:25.087822   14921 addons.go:231] Setting addon inspektor-gadget=true in "addons-798361"
	I1031 23:45:25.087823   14921 host.go:66] Checking if "addons-798361" exists ...
	I1031 23:45:25.088915   14921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:45:25.088960   14921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:45:25.089035   14921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:45:25.089072   14921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:45:25.089176   14921 host.go:66] Checking if "addons-798361" exists ...
	I1031 23:45:25.089232   14921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:45:25.089260   14921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:45:25.089324   14921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:45:25.089337   14921 host.go:66] Checking if "addons-798361" exists ...
	I1031 23:45:25.089358   14921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:45:25.089466   14921 host.go:66] Checking if "addons-798361" exists ...
	I1031 23:45:25.089821   14921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:45:25.089890   14921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:45:25.108532   14921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46227
	I1031 23:45:25.108560   14921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34451
	I1031 23:45:25.108774   14921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41173
	I1031 23:45:25.108911   14921 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:45:25.109222   14921 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:45:25.109225   14921 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:45:25.109356   14921 main.go:141] libmachine: Using API Version  1
	I1031 23:45:25.109365   14921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37527
	I1031 23:45:25.109374   14921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:45:25.109691   14921 main.go:141] libmachine: Using API Version  1
	I1031 23:45:25.109705   14921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:45:25.109838   14921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43327
	I1031 23:45:25.109940   14921 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:45:25.109989   14921 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:45:25.110263   14921 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:45:25.110338   14921 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:45:25.110344   14921 main.go:141] libmachine: Using API Version  1
	I1031 23:45:25.110358   14921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:45:25.110564   14921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:45:25.110615   14921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:45:25.110669   14921 main.go:141] libmachine: Using API Version  1
	I1031 23:45:25.110688   14921 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:45:25.110689   14921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:45:25.110722   14921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:45:25.110754   14921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:45:25.111205   14921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:45:25.111258   14921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:45:25.111309   14921 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:45:25.111335   14921 main.go:141] libmachine: Using API Version  1
	I1031 23:45:25.111348   14921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:45:25.111521   14921 main.go:141] libmachine: (addons-798361) Calling .GetState
	I1031 23:45:25.111750   14921 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:45:25.112296   14921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:45:25.112346   14921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:45:25.113594   14921 host.go:66] Checking if "addons-798361" exists ...
	I1031 23:45:25.114006   14921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:45:25.114046   14921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:45:25.116558   14921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:45:25.116596   14921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:45:25.116750   14921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33233
	I1031 23:45:25.118162   14921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:45:25.118186   14921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:45:25.132835   14921 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:45:25.132964   14921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35635
	I1031 23:45:25.133486   14921 main.go:141] libmachine: Using API Version  1
	I1031 23:45:25.133510   14921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:45:25.133580   14921 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:45:25.134022   14921 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:45:25.134110   14921 main.go:141] libmachine: Using API Version  1
	I1031 23:45:25.134129   14921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:45:25.134320   14921 main.go:141] libmachine: (addons-798361) Calling .GetState
	I1031 23:45:25.135101   14921 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:45:25.135578   14921 main.go:141] libmachine: (addons-798361) Calling .GetState
	I1031 23:45:25.138150   14921 addons.go:231] Setting addon default-storageclass=true in "addons-798361"
	I1031 23:45:25.138192   14921 host.go:66] Checking if "addons-798361" exists ...
	I1031 23:45:25.138590   14921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:45:25.138631   14921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:45:25.138856   14921 main.go:141] libmachine: (addons-798361) Calling .DriverName
	I1031 23:45:25.141051   14921 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1031 23:45:25.139331   14921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40701
	I1031 23:45:25.142468   14921 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1031 23:45:25.142484   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1031 23:45:25.142503   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHHostname
	I1031 23:45:25.143316   14921 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:45:25.143819   14921 main.go:141] libmachine: Using API Version  1
	I1031 23:45:25.143838   14921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:45:25.144277   14921 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:45:25.144492   14921 main.go:141] libmachine: (addons-798361) Calling .DriverName
	I1031 23:45:25.146104   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:45:25.146137   14921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43903
	I1031 23:45:25.146244   14921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36845
	I1031 23:45:25.146392   14921 main.go:141] libmachine: (addons-798361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:da:27", ip: ""} in network mk-addons-798361: {Iface:virbr1 ExpiryTime:2023-11-01 00:44:44 +0000 UTC Type:0 Mac:52:54:00:0f:da:27 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-798361 Clientid:01:52:54:00:0f:da:27}
	I1031 23:45:25.146410   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined IP address 192.168.39.214 and MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:45:25.146551   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHPort
	I1031 23:45:25.146730   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHKeyPath
	I1031 23:45:25.146923   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHUsername
	I1031 23:45:25.147310   14921 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/addons-798361/id_rsa Username:docker}
	I1031 23:45:25.148141   14921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41967
	I1031 23:45:25.149013   14921 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:45:25.149033   14921 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:45:25.149571   14921 main.go:141] libmachine: Using API Version  1
	I1031 23:45:25.149589   14921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:45:25.149712   14921 main.go:141] libmachine: Using API Version  1
	I1031 23:45:25.149722   14921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:45:25.150107   14921 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:45:25.150197   14921 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:45:25.150368   14921 main.go:141] libmachine: (addons-798361) Calling .GetState
	I1031 23:45:25.150551   14921 main.go:141] libmachine: (addons-798361) Calling .GetState
	I1031 23:45:25.151194   14921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43865
	I1031 23:45:25.152296   14921 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:45:25.152676   14921 main.go:141] libmachine: (addons-798361) Calling .DriverName
	I1031 23:45:25.152806   14921 main.go:141] libmachine: Using API Version  1
	I1031 23:45:25.152817   14921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:45:25.154689   14921 main.go:141] libmachine: (addons-798361) Calling .DriverName
	I1031 23:45:25.154704   14921 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1031 23:45:25.153380   14921 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:45:25.153502   14921 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:45:25.157822   14921 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.2
	I1031 23:45:25.155490   14921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45045
	I1031 23:45:25.156076   14921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45699
	I1031 23:45:25.156583   14921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:45:25.156698   14921 main.go:141] libmachine: Using API Version  1
	I1031 23:45:25.158171   14921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34539
	I1031 23:45:25.158300   14921 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:45:25.159411   14921 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1031 23:45:25.160831   14921 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1031 23:45:25.159465   14921 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1031 23:45:25.159508   14921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:45:25.159518   14921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:45:25.159855   14921 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:45:25.160880   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1031 23:45:25.159898   14921 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:45:25.160313   14921 main.go:141] libmachine: Using API Version  1
	I1031 23:45:25.160907   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHHostname
	I1031 23:45:25.161498   14921 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:45:25.161556   14921 main.go:141] libmachine: Using API Version  1
	I1031 23:45:25.162636   14921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:45:25.164001   14921 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1031 23:45:25.162739   14921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:45:25.163080   14921 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:45:25.163277   14921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43427
	I1031 23:45:25.163417   14921 main.go:141] libmachine: Using API Version  1
	I1031 23:45:25.165594   14921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:45:25.163437   14921 main.go:141] libmachine: (addons-798361) Calling .GetState
	I1031 23:45:25.164205   14921 main.go:141] libmachine: (addons-798361) Calling .GetState
	I1031 23:45:25.164411   14921 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:45:25.165445   14921 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1031 23:45:25.167212   14921 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1031 23:45:25.171886   14921 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1031 23:45:25.166604   14921 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:45:25.167002   14921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:45:25.167022   14921 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:45:25.168295   14921 main.go:141] libmachine: (addons-798361) Calling .DriverName
	I1031 23:45:25.169245   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:45:25.169837   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHPort
	I1031 23:45:25.170300   14921 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-798361"
	I1031 23:45:25.173331   14921 host.go:66] Checking if "addons-798361" exists ...
	I1031 23:45:25.173728   14921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:45:25.173761   14921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:45:25.175266   14921 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1031 23:45:25.174181   14921 main.go:141] libmachine: (addons-798361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:da:27", ip: ""} in network mk-addons-798361: {Iface:virbr1 ExpiryTime:2023-11-01 00:44:44 +0000 UTC Type:0 Mac:52:54:00:0f:da:27 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-798361 Clientid:01:52:54:00:0f:da:27}
	I1031 23:45:25.176566   14921 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1031 23:45:25.176584   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1031 23:45:25.176603   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHHostname
	I1031 23:45:25.174272   14921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:45:25.174560   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHKeyPath
	I1031 23:45:25.174680   14921 main.go:141] libmachine: Using API Version  1
	I1031 23:45:25.176923   14921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:45:25.178175   14921 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.11
	I1031 23:45:25.175149   14921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:45:25.175312   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined IP address 192.168.39.214 and MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:45:25.175591   14921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39735
	I1031 23:45:25.177126   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHUsername
	I1031 23:45:25.177555   14921 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:45:25.178851   14921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36815
	I1031 23:45:25.179712   14921 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1031 23:45:25.179726   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1031 23:45:25.179744   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHHostname
	I1031 23:45:25.179746   14921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:45:25.180284   14921 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:45:25.180284   14921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:45:25.180326   14921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:45:25.180641   14921 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/addons-798361/id_rsa Username:docker}
	I1031 23:45:25.180670   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:45:25.180727   14921 main.go:141] libmachine: Using API Version  1
	I1031 23:45:25.180740   14921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:45:25.180957   14921 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:45:25.181029   14921 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:45:25.181505   14921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:45:25.181544   14921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:45:25.181647   14921 main.go:141] libmachine: Using API Version  1
	I1031 23:45:25.181664   14921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:45:25.181908   14921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43873
	I1031 23:45:25.182028   14921 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:45:25.182685   14921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:45:25.182710   14921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:45:25.182899   14921 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:45:25.182998   14921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45969
	I1031 23:45:25.183464   14921 main.go:141] libmachine: Using API Version  1
	I1031 23:45:25.183482   14921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:45:25.183834   14921 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:45:25.183898   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:45:25.184398   14921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:45:25.184438   14921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:45:25.184890   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHPort
	I1031 23:45:25.184920   14921 main.go:141] libmachine: (addons-798361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:da:27", ip: ""} in network mk-addons-798361: {Iface:virbr1 ExpiryTime:2023-11-01 00:44:44 +0000 UTC Type:0 Mac:52:54:00:0f:da:27 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-798361 Clientid:01:52:54:00:0f:da:27}
	I1031 23:45:25.184939   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined IP address 192.168.39.214 and MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:45:25.184949   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHPort
	I1031 23:45:25.184992   14921 main.go:141] libmachine: (addons-798361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:da:27", ip: ""} in network mk-addons-798361: {Iface:virbr1 ExpiryTime:2023-11-01 00:44:44 +0000 UTC Type:0 Mac:52:54:00:0f:da:27 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-798361 Clientid:01:52:54:00:0f:da:27}
	I1031 23:45:25.185018   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined IP address 192.168.39.214 and MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:45:25.185082   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHKeyPath
	I1031 23:45:25.185118   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHKeyPath
	I1031 23:45:25.185225   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHUsername
	I1031 23:45:25.185272   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHUsername
	I1031 23:45:25.185390   14921 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/addons-798361/id_rsa Username:docker}
	I1031 23:45:25.185390   14921 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/addons-798361/id_rsa Username:docker}
	I1031 23:45:25.189719   14921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33167
	I1031 23:45:25.190090   14921 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:45:25.190457   14921 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:45:25.190688   14921 main.go:141] libmachine: Using API Version  1
	I1031 23:45:25.190703   14921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:45:25.191096   14921 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:45:25.191229   14921 main.go:141] libmachine: (addons-798361) Calling .GetState
	I1031 23:45:25.191732   14921 main.go:141] libmachine: Using API Version  1
	I1031 23:45:25.191748   14921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:45:25.192284   14921 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:45:25.192866   14921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:45:25.192903   14921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:45:25.193145   14921 main.go:141] libmachine: (addons-798361) Calling .DriverName
	I1031 23:45:25.194956   14921 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1031 23:45:25.196194   14921 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1031 23:45:25.198021   14921 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1031 23:45:25.199942   14921 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1031 23:45:25.200011   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1031 23:45:25.200040   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHHostname
	I1031 23:45:25.203165   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:45:25.203582   14921 main.go:141] libmachine: (addons-798361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:da:27", ip: ""} in network mk-addons-798361: {Iface:virbr1 ExpiryTime:2023-11-01 00:44:44 +0000 UTC Type:0 Mac:52:54:00:0f:da:27 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-798361 Clientid:01:52:54:00:0f:da:27}
	I1031 23:45:25.203644   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined IP address 192.168.39.214 and MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:45:25.203804   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHPort
	I1031 23:45:25.203993   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHKeyPath
	I1031 23:45:25.204490   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHUsername
	I1031 23:45:25.204635   14921 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/addons-798361/id_rsa Username:docker}
	I1031 23:45:25.209864   14921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40065
	I1031 23:45:25.210253   14921 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:45:25.210729   14921 main.go:141] libmachine: Using API Version  1
	I1031 23:45:25.210751   14921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:45:25.210817   14921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33489
	I1031 23:45:25.211093   14921 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:45:25.212023   14921 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:45:25.212122   14921 main.go:141] libmachine: (addons-798361) Calling .GetState
	I1031 23:45:25.213170   14921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43857
	I1031 23:45:25.213253   14921 main.go:141] libmachine: Using API Version  1
	I1031 23:45:25.213270   14921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:45:25.213603   14921 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:45:25.213703   14921 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:45:25.213981   14921 main.go:141] libmachine: (addons-798361) Calling .GetState
	I1031 23:45:25.214225   14921 main.go:141] libmachine: (addons-798361) Calling .DriverName
	I1031 23:45:25.214374   14921 main.go:141] libmachine: Using API Version  1
	I1031 23:45:25.214387   14921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:45:25.214728   14921 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:45:25.214869   14921 main.go:141] libmachine: (addons-798361) Calling .GetState
	I1031 23:45:25.215002   14921 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 23:45:25.215017   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 23:45:25.215033   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHHostname
	I1031 23:45:25.215506   14921 main.go:141] libmachine: (addons-798361) Calling .DriverName
	I1031 23:45:25.215623   14921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44803
	I1031 23:45:25.217575   14921 out.go:177]   - Using image docker.io/registry:2.8.3
	I1031 23:45:25.215953   14921 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:45:25.217922   14921 main.go:141] libmachine: (addons-798361) Calling .DriverName
	I1031 23:45:25.220330   14921 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1031 23:45:25.219246   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:45:25.219572   14921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42741
	I1031 23:45:25.219959   14921 main.go:141] libmachine: Using API Version  1
	I1031 23:45:25.220008   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHPort
	I1031 23:45:25.221862   14921 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1031 23:45:25.221890   14921 main.go:141] libmachine: (addons-798361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:da:27", ip: ""} in network mk-addons-798361: {Iface:virbr1 ExpiryTime:2023-11-01 00:44:44 +0000 UTC Type:0 Mac:52:54:00:0f:da:27 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-798361 Clientid:01:52:54:00:0f:da:27}
	I1031 23:45:25.222406   14921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33243
	I1031 23:45:25.223057   14921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:45:25.223133   14921 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 23:45:25.224704   14921 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 23:45:25.222443   14921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46773
	I1031 23:45:25.224718   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 23:45:25.224734   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHHostname
	I1031 23:45:25.223149   14921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38499
	I1031 23:45:25.223195   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1031 23:45:25.224858   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHHostname
	I1031 23:45:25.223212   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined IP address 192.168.39.214 and MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:45:25.223341   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHKeyPath
	I1031 23:45:25.223448   14921 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:45:25.223615   14921 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:45:25.223903   14921 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:45:25.225053   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHUsername
	I1031 23:45:25.225077   14921 main.go:141] libmachine: (addons-798361) Calling .GetState
	I1031 23:45:25.225168   14921 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:45:25.225249   14921 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/addons-798361/id_rsa Username:docker}
	I1031 23:45:25.225843   14921 main.go:141] libmachine: Using API Version  1
	I1031 23:45:25.225854   14921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:45:25.225976   14921 main.go:141] libmachine: Using API Version  1
	I1031 23:45:25.225987   14921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:45:25.226472   14921 main.go:141] libmachine: Using API Version  1
	I1031 23:45:25.226488   14921 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:45:25.226527   14921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:45:25.226600   14921 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:45:25.226849   14921 main.go:141] libmachine: (addons-798361) Calling .GetState
	I1031 23:45:25.227089   14921 main.go:141] libmachine: (addons-798361) Calling .DriverName
	I1031 23:45:25.227148   14921 main.go:141] libmachine: (addons-798361) Calling .GetState
	I1031 23:45:25.227355   14921 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:45:25.228968   14921 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1031 23:45:25.230877   14921 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1031 23:45:25.230887   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1031 23:45:25.230897   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHHostname
	I1031 23:45:25.228214   14921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:45:25.230956   14921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:45:25.228902   14921 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:45:25.229581   14921 main.go:141] libmachine: (addons-798361) Calling .DriverName
	I1031 23:45:25.230212   14921 main.go:141] libmachine: (addons-798361) Calling .DriverName
	I1031 23:45:25.230394   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:45:25.231265   14921 main.go:141] libmachine: (addons-798361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:da:27", ip: ""} in network mk-addons-798361: {Iface:virbr1 ExpiryTime:2023-11-01 00:44:44 +0000 UTC Type:0 Mac:52:54:00:0f:da:27 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-798361 Clientid:01:52:54:00:0f:da:27}
	I1031 23:45:25.231285   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined IP address 192.168.39.214 and MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:45:25.230793   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:45:25.231374   14921 main.go:141] libmachine: (addons-798361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:da:27", ip: ""} in network mk-addons-798361: {Iface:virbr1 ExpiryTime:2023-11-01 00:44:44 +0000 UTC Type:0 Mac:52:54:00:0f:da:27 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-798361 Clientid:01:52:54:00:0f:da:27}
	I1031 23:45:25.231408   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined IP address 192.168.39.214 and MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:45:25.232896   14921 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1031 23:45:25.231792   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHPort
	I1031 23:45:25.231821   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHPort
	I1031 23:45:25.232003   14921 main.go:141] libmachine: Using API Version  1
	I1031 23:45:25.233655   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:45:25.234296   14921 main.go:141] libmachine: (addons-798361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:da:27", ip: ""} in network mk-addons-798361: {Iface:virbr1 ExpiryTime:2023-11-01 00:44:44 +0000 UTC Type:0 Mac:52:54:00:0f:da:27 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-798361 Clientid:01:52:54:00:0f:da:27}
	I1031 23:45:25.234311   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined IP address 192.168.39.214 and MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:45:25.234320   14921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:45:25.234328   14921 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1031 23:45:25.234330   14921 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1031 23:45:25.234339   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1031 23:45:25.235680   14921 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1031 23:45:25.235699   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1031 23:45:25.235718   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHHostname
	I1031 23:45:25.234239   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHPort
	I1031 23:45:25.234353   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHHostname
	I1031 23:45:25.234524   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHKeyPath
	I1031 23:45:25.234555   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHKeyPath
	I1031 23:45:25.234827   14921 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:45:25.235974   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHUsername
	I1031 23:45:25.236110   14921 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/addons-798361/id_rsa Username:docker}
	I1031 23:45:25.236174   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHKeyPath
	I1031 23:45:25.236251   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHUsername
	I1031 23:45:25.236439   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHUsername
	I1031 23:45:25.236462   14921 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/addons-798361/id_rsa Username:docker}
	I1031 23:45:25.236605   14921 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/addons-798361/id_rsa Username:docker}
	I1031 23:45:25.238159   14921 main.go:141] libmachine: (addons-798361) Calling .GetState
	I1031 23:45:25.239446   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:45:25.239778   14921 main.go:141] libmachine: (addons-798361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:da:27", ip: ""} in network mk-addons-798361: {Iface:virbr1 ExpiryTime:2023-11-01 00:44:44 +0000 UTC Type:0 Mac:52:54:00:0f:da:27 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-798361 Clientid:01:52:54:00:0f:da:27}
	I1031 23:45:25.239798   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined IP address 192.168.39.214 and MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:45:25.239810   14921 main.go:141] libmachine: (addons-798361) Calling .DriverName
	I1031 23:45:25.239877   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:45:25.239941   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHPort
	I1031 23:45:25.241585   14921 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.21.0
	I1031 23:45:25.240129   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHKeyPath
	I1031 23:45:25.240279   14921 main.go:141] libmachine: (addons-798361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:da:27", ip: ""} in network mk-addons-798361: {Iface:virbr1 ExpiryTime:2023-11-01 00:44:44 +0000 UTC Type:0 Mac:52:54:00:0f:da:27 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-798361 Clientid:01:52:54:00:0f:da:27}
	I1031 23:45:25.240451   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHPort
	I1031 23:45:25.242938   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined IP address 192.168.39.214 and MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:45:25.242969   14921 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1031 23:45:25.242985   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1031 23:45:25.243003   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHHostname
	I1031 23:45:25.243511   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHUsername
	I1031 23:45:25.243579   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHKeyPath
	I1031 23:45:25.243775   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHUsername
	I1031 23:45:25.243806   14921 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/addons-798361/id_rsa Username:docker}
	I1031 23:45:25.243992   14921 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/addons-798361/id_rsa Username:docker}
	I1031 23:45:25.246390   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:45:25.246801   14921 main.go:141] libmachine: (addons-798361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:da:27", ip: ""} in network mk-addons-798361: {Iface:virbr1 ExpiryTime:2023-11-01 00:44:44 +0000 UTC Type:0 Mac:52:54:00:0f:da:27 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-798361 Clientid:01:52:54:00:0f:da:27}
	I1031 23:45:25.246822   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined IP address 192.168.39.214 and MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:45:25.246964   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHPort
	I1031 23:45:25.247107   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHKeyPath
	I1031 23:45:25.247204   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHUsername
	I1031 23:45:25.247299   14921 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/addons-798361/id_rsa Username:docker}
	I1031 23:45:25.252234   14921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42537
	I1031 23:45:25.252559   14921 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:45:25.253012   14921 main.go:141] libmachine: Using API Version  1
	I1031 23:45:25.253031   14921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:45:25.253305   14921 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:45:25.253475   14921 main.go:141] libmachine: (addons-798361) Calling .GetState
	I1031 23:45:25.254985   14921 main.go:141] libmachine: (addons-798361) Calling .DriverName
	I1031 23:45:25.256788   14921 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1031 23:45:25.258196   14921 out.go:177]   - Using image docker.io/busybox:stable
	I1031 23:45:25.259565   14921 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1031 23:45:25.259581   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1031 23:45:25.259598   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHHostname
	I1031 23:45:25.262245   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:45:25.262752   14921 main.go:141] libmachine: (addons-798361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:da:27", ip: ""} in network mk-addons-798361: {Iface:virbr1 ExpiryTime:2023-11-01 00:44:44 +0000 UTC Type:0 Mac:52:54:00:0f:da:27 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-798361 Clientid:01:52:54:00:0f:da:27}
	I1031 23:45:25.262791   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined IP address 192.168.39.214 and MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:45:25.262885   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHPort
	I1031 23:45:25.263045   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHKeyPath
	I1031 23:45:25.263176   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHUsername
	I1031 23:45:25.263264   14921 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/addons-798361/id_rsa Username:docker}
	I1031 23:45:25.347599   14921 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-798361" context rescaled to 1 replicas
	I1031 23:45:25.347640   14921 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1031 23:45:25.349131   14921 out.go:177] * Verifying Kubernetes components...
	I1031 23:45:25.350838   14921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 23:45:25.390160   14921 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1031 23:45:25.390185   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1031 23:45:25.417415   14921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1031 23:45:25.452710   14921 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1031 23:45:25.452732   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1031 23:45:25.462638   14921 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1031 23:45:25.463549   14921 node_ready.go:35] waiting up to 6m0s for node "addons-798361" to be "Ready" ...
	I1031 23:45:25.520336   14921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1031 23:45:25.520499   14921 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1031 23:45:25.520541   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1031 23:45:25.524704   14921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1031 23:45:25.554693   14921 node_ready.go:49] node "addons-798361" has status "Ready":"True"
	I1031 23:45:25.554716   14921 node_ready.go:38] duration metric: took 91.140003ms waiting for node "addons-798361" to be "Ready" ...
	I1031 23:45:25.554724   14921 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 23:45:25.584955   14921 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1031 23:45:25.584983   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1031 23:45:25.605640   14921 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1031 23:45:25.605662   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1031 23:45:25.616210   14921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1031 23:45:25.621431   14921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1031 23:45:25.629207   14921 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1031 23:45:25.629234   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1031 23:45:25.632219   14921 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1031 23:45:25.632241   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1031 23:45:25.645441   14921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 23:45:25.652558   14921 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1031 23:45:25.652583   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1031 23:45:25.694446   14921 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2fzrd" in "kube-system" namespace to be "Ready" ...
	I1031 23:45:25.716079   14921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 23:45:25.736594   14921 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1031 23:45:25.736623   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1031 23:45:25.827396   14921 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1031 23:45:25.827426   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1031 23:45:25.837086   14921 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1031 23:45:25.837112   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1031 23:45:25.846196   14921 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1031 23:45:25.846225   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1031 23:45:25.849694   14921 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1031 23:45:25.849712   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1031 23:45:25.966454   14921 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1031 23:45:25.966479   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1031 23:45:26.027317   14921 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 23:45:26.027343   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1031 23:45:26.051407   14921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1031 23:45:26.065308   14921 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1031 23:45:26.065335   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1031 23:45:26.083061   14921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1031 23:45:26.084643   14921 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1031 23:45:26.084658   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1031 23:45:26.119875   14921 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1031 23:45:26.119902   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1031 23:45:26.176376   14921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 23:45:26.194757   14921 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1031 23:45:26.194778   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1031 23:45:26.292649   14921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1031 23:45:26.307081   14921 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1031 23:45:26.307103   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1031 23:45:26.309055   14921 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1031 23:45:26.309077   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1031 23:45:26.368839   14921 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1031 23:45:26.368864   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1031 23:45:26.423415   14921 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1031 23:45:26.423436   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1031 23:45:26.467805   14921 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1031 23:45:26.467831   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1031 23:45:26.523966   14921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1031 23:45:26.536307   14921 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1031 23:45:26.536335   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1031 23:45:26.604217   14921 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1031 23:45:26.604247   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1031 23:45:26.651187   14921 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1031 23:45:26.651206   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1031 23:45:26.699309   14921 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1031 23:45:26.699333   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1031 23:45:26.737647   14921 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1031 23:45:26.737673   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1031 23:45:26.785399   14921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1031 23:45:28.088935   14921 pod_ready.go:102] pod "coredns-5dd5756b68-2fzrd" in "kube-system" namespace has status "Ready":"False"
	I1031 23:45:29.629802   14921 pod_ready.go:97] pod "coredns-5dd5756b68-2fzrd" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-31 23:45:25 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-31 23:45:25 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-31 23:45:25 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-31 23:45:25 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.214 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-10-31 23:45:25 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:137,Signal:0,Reason:ContainerStatusUnknown,Message:The container could not be located when the pod was terminated,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID: ContainerID: Started:0xc003b5cb2a AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1031 23:45:29.629839   14921 pod_ready.go:81] duration metric: took 3.935357243s waiting for pod "coredns-5dd5756b68-2fzrd" in "kube-system" namespace to be "Ready" ...
	E1031 23:45:29.629851   14921 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-2fzrd" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-31 23:45:25 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-31 23:45:25 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-31 23:45:25 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-31 23:45:25 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.214 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-10-31 23:45:25 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:137,Signal:0,Reason:ContainerStatusUnknown,Message:The container could not be located when the pod was terminated,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID: ContainerID: Started:0xc003b5cb2a AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1031 23:45:29.629861   14921 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mxmdx" in "kube-system" namespace to be "Ready" ...
	I1031 23:45:30.765996   14921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.348543121s)
	I1031 23:45:30.766015   14921 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.303333917s)
	I1031 23:45:30.766048   14921 main.go:141] libmachine: Making call to close driver server
	I1031 23:45:30.766053   14921 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1031 23:45:30.766062   14921 main.go:141] libmachine: (addons-798361) Calling .Close
	I1031 23:45:30.766129   14921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.245757805s)
	I1031 23:45:30.766169   14921 main.go:141] libmachine: Making call to close driver server
	I1031 23:45:30.766185   14921 main.go:141] libmachine: (addons-798361) Calling .Close
	I1031 23:45:30.766333   14921 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:45:30.766348   14921 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 23:45:30.766358   14921 main.go:141] libmachine: Making call to close driver server
	I1031 23:45:30.766368   14921 main.go:141] libmachine: (addons-798361) Calling .Close
	I1031 23:45:30.766476   14921 main.go:141] libmachine: (addons-798361) DBG | Closing plugin on server side
	I1031 23:45:30.766503   14921 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:45:30.766520   14921 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 23:45:30.766543   14921 main.go:141] libmachine: Making call to close driver server
	I1031 23:45:30.766565   14921 main.go:141] libmachine: (addons-798361) Calling .Close
	I1031 23:45:30.766606   14921 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:45:30.766618   14921 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 23:45:30.766828   14921 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:45:30.766847   14921 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 23:45:32.032966   14921 pod_ready.go:102] pod "coredns-5dd5756b68-mxmdx" in "kube-system" namespace has status "Ready":"False"
	I1031 23:45:32.033749   14921 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1031 23:45:32.033779   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHHostname
	I1031 23:45:32.037028   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:45:32.037473   14921 main.go:141] libmachine: (addons-798361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:da:27", ip: ""} in network mk-addons-798361: {Iface:virbr1 ExpiryTime:2023-11-01 00:44:44 +0000 UTC Type:0 Mac:52:54:00:0f:da:27 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-798361 Clientid:01:52:54:00:0f:da:27}
	I1031 23:45:32.037510   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined IP address 192.168.39.214 and MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:45:32.037674   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHPort
	I1031 23:45:32.037888   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHKeyPath
	I1031 23:45:32.038055   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHUsername
	I1031 23:45:32.038208   14921 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/addons-798361/id_rsa Username:docker}
	I1031 23:45:32.267032   14921 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1031 23:45:32.331148   14921 addons.go:231] Setting addon gcp-auth=true in "addons-798361"
	I1031 23:45:32.331203   14921 host.go:66] Checking if "addons-798361" exists ...
	I1031 23:45:32.331622   14921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:45:32.331666   14921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:45:32.348904   14921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45485
	I1031 23:45:32.349316   14921 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:45:32.349789   14921 main.go:141] libmachine: Using API Version  1
	I1031 23:45:32.349813   14921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:45:32.350134   14921 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:45:32.350754   14921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:45:32.350802   14921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:45:32.367746   14921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35581
	I1031 23:45:32.368251   14921 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:45:32.368803   14921 main.go:141] libmachine: Using API Version  1
	I1031 23:45:32.368833   14921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:45:32.369159   14921 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:45:32.369467   14921 main.go:141] libmachine: (addons-798361) Calling .GetState
	I1031 23:45:32.371484   14921 main.go:141] libmachine: (addons-798361) Calling .DriverName
	I1031 23:45:32.371768   14921 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1031 23:45:32.371798   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHHostname
	I1031 23:45:32.374969   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:45:32.375754   14921 main.go:141] libmachine: (addons-798361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:da:27", ip: ""} in network mk-addons-798361: {Iface:virbr1 ExpiryTime:2023-11-01 00:44:44 +0000 UTC Type:0 Mac:52:54:00:0f:da:27 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-798361 Clientid:01:52:54:00:0f:da:27}
	I1031 23:45:32.375792   14921 main.go:141] libmachine: (addons-798361) DBG | domain addons-798361 has defined IP address 192.168.39.214 and MAC address 52:54:00:0f:da:27 in network mk-addons-798361
	I1031 23:45:32.376011   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHPort
	I1031 23:45:32.376205   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHKeyPath
	I1031 23:45:32.376363   14921 main.go:141] libmachine: (addons-798361) Calling .GetSSHUsername
	I1031 23:45:32.376512   14921 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/addons-798361/id_rsa Username:docker}
	I1031 23:45:33.877219   14921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.352479092s)
	I1031 23:45:33.877264   14921 main.go:141] libmachine: Making call to close driver server
	I1031 23:45:33.877278   14921 main.go:141] libmachine: (addons-798361) Calling .Close
	I1031 23:45:33.877306   14921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.261069772s)
	I1031 23:45:33.877327   14921 main.go:141] libmachine: Making call to close driver server
	I1031 23:45:33.877340   14921 main.go:141] libmachine: (addons-798361) Calling .Close
	I1031 23:45:33.877382   14921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.2559272s)
	I1031 23:45:33.877415   14921 main.go:141] libmachine: Making call to close driver server
	I1031 23:45:33.877428   14921 main.go:141] libmachine: (addons-798361) Calling .Close
	I1031 23:45:33.877536   14921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.232068375s)
	I1031 23:45:33.877557   14921 main.go:141] libmachine: Making call to close driver server
	I1031 23:45:33.877565   14921 main.go:141] libmachine: (addons-798361) Calling .Close
	I1031 23:45:33.877651   14921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.161547529s)
	I1031 23:45:33.877666   14921 main.go:141] libmachine: Making call to close driver server
	I1031 23:45:33.877674   14921 main.go:141] libmachine: (addons-798361) Calling .Close
	I1031 23:45:33.877676   14921 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:45:33.877681   14921 main.go:141] libmachine: (addons-798361) DBG | Closing plugin on server side
	I1031 23:45:33.877686   14921 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 23:45:33.877704   14921 main.go:141] libmachine: Making call to close driver server
	I1031 23:45:33.877707   14921 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:45:33.877713   14921 main.go:141] libmachine: (addons-798361) Calling .Close
	I1031 23:45:33.877718   14921 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 23:45:33.877729   14921 main.go:141] libmachine: Making call to close driver server
	I1031 23:45:33.877737   14921 main.go:141] libmachine: (addons-798361) Calling .Close
	I1031 23:45:33.877744   14921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.826309552s)
	I1031 23:45:33.877761   14921 main.go:141] libmachine: Making call to close driver server
	I1031 23:45:33.877784   14921 main.go:141] libmachine: (addons-798361) Calling .Close
	I1031 23:45:33.877801   14921 main.go:141] libmachine: (addons-798361) DBG | Closing plugin on server side
	I1031 23:45:33.877828   14921 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:45:33.877835   14921 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 23:45:33.877843   14921 main.go:141] libmachine: Making call to close driver server
	I1031 23:45:33.877845   14921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.794761447s)
	I1031 23:45:33.877851   14921 main.go:141] libmachine: (addons-798361) Calling .Close
	I1031 23:45:33.877861   14921 main.go:141] libmachine: Making call to close driver server
	I1031 23:45:33.877870   14921 main.go:141] libmachine: (addons-798361) Calling .Close
	I1031 23:45:33.877934   14921 main.go:141] libmachine: (addons-798361) DBG | Closing plugin on server side
	I1031 23:45:33.877963   14921 main.go:141] libmachine: (addons-798361) DBG | Closing plugin on server side
	I1031 23:45:33.877982   14921 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:45:33.877985   14921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.701583776s)
	I1031 23:45:33.877990   14921 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 23:45:33.878002   14921 main.go:141] libmachine: Making call to close driver server
	I1031 23:45:33.878003   14921 main.go:141] libmachine: Making call to close driver server
	I1031 23:45:33.878010   14921 main.go:141] libmachine: (addons-798361) Calling .Close
	I1031 23:45:33.878035   14921 main.go:141] libmachine: (addons-798361) DBG | Closing plugin on server side
	I1031 23:45:33.878012   14921 main.go:141] libmachine: (addons-798361) Calling .Close
	I1031 23:45:33.878059   14921 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:45:33.878068   14921 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 23:45:33.881512   14921 addons.go:467] Verifying addon ingress=true in "addons-798361"
	I1031 23:45:33.883961   14921 out.go:177] * Verifying ingress addon...
	I1031 23:45:33.881822   14921 main.go:141] libmachine: (addons-798361) DBG | Closing plugin on server side
	I1031 23:45:33.881836   14921 main.go:141] libmachine: (addons-798361) DBG | Closing plugin on server side
	I1031 23:45:33.881859   14921 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:45:33.881871   14921 main.go:141] libmachine: (addons-798361) DBG | Closing plugin on server side
	I1031 23:45:33.881882   14921 main.go:141] libmachine: (addons-798361) DBG | Closing plugin on server side
	I1031 23:45:33.881892   14921 main.go:141] libmachine: (addons-798361) DBG | Closing plugin on server side
	I1031 23:45:33.881910   14921 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:45:33.881920   14921 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:45:33.881930   14921 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:45:33.881941   14921 main.go:141] libmachine: (addons-798361) DBG | Closing plugin on server side
	I1031 23:45:33.881955   14921 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:45:33.881965   14921 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:45:33.881975   14921 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:45:33.882158   14921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.589453702s)
	I1031 23:45:33.882288   14921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.358276311s)
	I1031 23:45:33.885629   14921 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 23:45:33.885658   14921 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 23:45:33.885662   14921 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 23:45:33.885680   14921 main.go:141] libmachine: Making call to close driver server
	I1031 23:45:33.885696   14921 main.go:141] libmachine: (addons-798361) Calling .Close
	I1031 23:45:33.885701   14921 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 23:45:33.885708   14921 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 23:45:33.885729   14921 main.go:141] libmachine: Making call to close driver server
	I1031 23:45:33.885742   14921 main.go:141] libmachine: (addons-798361) Calling .Close
	I1031 23:45:33.885756   14921 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 23:45:33.885767   14921 main.go:141] libmachine: Making call to close driver server
	I1031 23:45:33.885776   14921 main.go:141] libmachine: (addons-798361) Calling .Close
	I1031 23:45:33.885668   14921 main.go:141] libmachine: Making call to close driver server
	I1031 23:45:33.885799   14921 main.go:141] libmachine: (addons-798361) Calling .Close
	I1031 23:45:33.885826   14921 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 23:45:33.885643   14921 main.go:141] libmachine: Making call to close driver server
	I1031 23:45:33.885847   14921 main.go:141] libmachine: (addons-798361) Calling .Close
	W1031 23:45:33.886122   14921 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1031 23:45:33.886149   14921 retry.go:31] will retry after 188.533448ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1031 23:45:33.886229   14921 main.go:141] libmachine: (addons-798361) DBG | Closing plugin on server side
	I1031 23:45:33.886248   14921 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:45:33.886258   14921 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:45:33.886262   14921 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 23:45:33.886266   14921 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 23:45:33.886273   14921 addons.go:467] Verifying addon registry=true in "addons-798361"
	I1031 23:45:33.886277   14921 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:45:33.886288   14921 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 23:45:33.886295   14921 addons.go:467] Verifying addon metrics-server=true in "addons-798361"
	I1031 23:45:33.886317   14921 main.go:141] libmachine: (addons-798361) DBG | Closing plugin on server side
	I1031 23:45:33.887835   14921 out.go:177] * Verifying registry addon...
	I1031 23:45:33.886336   14921 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:45:33.886356   14921 main.go:141] libmachine: (addons-798361) DBG | Closing plugin on server side
	I1031 23:45:33.886248   14921 main.go:141] libmachine: (addons-798361) DBG | Closing plugin on server side
	I1031 23:45:33.886367   14921 main.go:141] libmachine: (addons-798361) DBG | Closing plugin on server side
	I1031 23:45:33.886385   14921 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:45:33.886593   14921 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1031 23:45:33.889189   14921 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 23:45:33.889236   14921 main.go:141] libmachine: Making call to close driver server
	I1031 23:45:33.889259   14921 main.go:141] libmachine: (addons-798361) Calling .Close
	I1031 23:45:33.889209   14921 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 23:45:33.889565   14921 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:45:33.889603   14921 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 23:45:33.890019   14921 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1031 23:45:33.909607   14921 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1031 23:45:33.909633   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:33.909670   14921 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1031 23:45:33.909692   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:33.922763   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:33.926774   14921 main.go:141] libmachine: Making call to close driver server
	I1031 23:45:33.926797   14921 main.go:141] libmachine: (addons-798361) Calling .Close
	I1031 23:45:33.926897   14921 main.go:141] libmachine: Making call to close driver server
	I1031 23:45:33.926917   14921 main.go:141] libmachine: (addons-798361) Calling .Close
	I1031 23:45:33.927193   14921 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:45:33.927214   14921 main.go:141] libmachine: Making call to close connection to plugin binary
	W1031 23:45:33.927303   14921 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1031 23:45:33.928040   14921 main.go:141] libmachine: (addons-798361) DBG | Closing plugin on server side
	I1031 23:45:33.928074   14921 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:45:33.928096   14921 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 23:45:33.930690   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:34.056742   14921 pod_ready.go:102] pod "coredns-5dd5756b68-mxmdx" in "kube-system" namespace has status "Ready":"False"
	I1031 23:45:34.075257   14921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1031 23:45:34.444804   14921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.659350353s)
	I1031 23:45:34.444847   14921 main.go:141] libmachine: Making call to close driver server
	I1031 23:45:34.444855   14921 main.go:141] libmachine: (addons-798361) Calling .Close
	I1031 23:45:34.444868   14921 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.073074873s)
	I1031 23:45:34.446625   14921 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1031 23:45:34.445176   14921 main.go:141] libmachine: (addons-798361) DBG | Closing plugin on server side
	I1031 23:45:34.445205   14921 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:45:34.447980   14921 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 23:45:34.447995   14921 main.go:141] libmachine: Making call to close driver server
	I1031 23:45:34.448008   14921 main.go:141] libmachine: (addons-798361) Calling .Close
	I1031 23:45:34.449543   14921 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1031 23:45:34.448322   14921 main.go:141] libmachine: (addons-798361) DBG | Closing plugin on server side
	I1031 23:45:34.450856   14921 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1031 23:45:34.450869   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1031 23:45:34.448354   14921 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:45:34.450936   14921 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 23:45:34.450958   14921 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-798361"
	I1031 23:45:34.452534   14921 out.go:177] * Verifying csi-hostpath-driver addon...
	I1031 23:45:34.454500   14921 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1031 23:45:34.480572   14921 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1031 23:45:34.480594   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1031 23:45:34.483445   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:34.483914   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:34.509506   14921 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1031 23:45:34.509532   14921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1031 23:45:34.540185   14921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1031 23:45:34.552817   14921 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1031 23:45:34.552836   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:34.643097   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:34.959417   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:34.959741   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:35.158292   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:35.427750   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:35.439409   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:35.649146   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:35.934218   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:35.941890   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:36.160193   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:36.414254   14921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.338919777s)
	I1031 23:45:36.414310   14921 main.go:141] libmachine: Making call to close driver server
	I1031 23:45:36.414324   14921 main.go:141] libmachine: (addons-798361) Calling .Close
	I1031 23:45:36.414590   14921 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:45:36.414645   14921 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 23:45:36.414667   14921 main.go:141] libmachine: Making call to close driver server
	I1031 23:45:36.414667   14921 main.go:141] libmachine: (addons-798361) DBG | Closing plugin on server side
	I1031 23:45:36.414677   14921 main.go:141] libmachine: (addons-798361) Calling .Close
	I1031 23:45:36.414955   14921 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:45:36.414971   14921 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 23:45:36.414993   14921 main.go:141] libmachine: (addons-798361) DBG | Closing plugin on server side
	I1031 23:45:36.445947   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:36.446123   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:36.596589   14921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.056364402s)
	I1031 23:45:36.596642   14921 main.go:141] libmachine: Making call to close driver server
	I1031 23:45:36.596654   14921 main.go:141] libmachine: (addons-798361) Calling .Close
	I1031 23:45:36.597018   14921 main.go:141] libmachine: (addons-798361) DBG | Closing plugin on server side
	I1031 23:45:36.597071   14921 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:45:36.597089   14921 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 23:45:36.597108   14921 main.go:141] libmachine: Making call to close driver server
	I1031 23:45:36.597121   14921 main.go:141] libmachine: (addons-798361) Calling .Close
	I1031 23:45:36.597348   14921 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:45:36.597372   14921 main.go:141] libmachine: Making call to close connection to plugin binary
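The addon manifests in the steps above are applied on the guest by running `kubectl apply` over SSH with `KUBECONFIG` pointed at the node's admin kubeconfig. Below is a minimal sketch of that pattern in Go, assuming `kubectl` is available on the local PATH; the manifest name is illustrative and the kubeconfig path is the guest-side path from the log, which would not exist on a workstation.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Hypothetical manifest; the real run applies /etc/kubernetes/addons/*.yaml on the guest.
	cmd := exec.Command("kubectl", "apply", "-f", "gcp-auth-webhook.yaml")
	// Point kubectl at an explicit kubeconfig, as the logged command does with KUBECONFIG=...
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Fprintln(os.Stderr, "apply failed:", err)
		os.Exit(1)
	}
}
```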
	I1031 23:45:36.599398   14921 addons.go:467] Verifying addon gcp-auth=true in "addons-798361"
	I1031 23:45:36.601042   14921 out.go:177] * Verifying gcp-auth addon...
	I1031 23:45:36.601732   14921 pod_ready.go:102] pod "coredns-5dd5756b68-mxmdx" in "kube-system" namespace has status "Ready":"False"
	I1031 23:45:36.602896   14921 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1031 23:45:36.615695   14921 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1031 23:45:36.615718   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:36.632567   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:36.681254   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:36.927799   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:36.940771   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:37.147475   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:37.158958   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:37.442106   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:37.450193   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:37.650447   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:37.654410   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:37.940538   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:37.940665   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:38.137432   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:38.150314   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:38.431333   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:38.435097   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:38.639408   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:38.650111   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:38.928620   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:38.935088   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:39.044667   14921 pod_ready.go:102] pod "coredns-5dd5756b68-mxmdx" in "kube-system" namespace has status "Ready":"False"
	I1031 23:45:39.137629   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:39.148912   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:39.429506   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:39.436336   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:39.636487   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:39.647869   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:39.927857   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:39.935161   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:40.137520   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:40.147789   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:40.427796   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:40.435218   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:40.637512   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:40.652639   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:40.929660   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:40.935048   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:41.136230   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:41.150643   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:41.427742   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:41.434962   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:41.548896   14921 pod_ready.go:102] pod "coredns-5dd5756b68-mxmdx" in "kube-system" namespace has status "Ready":"False"
	I1031 23:45:41.639862   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:41.652641   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:41.948505   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:41.963672   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:42.137813   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:42.153347   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:42.440035   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:42.441689   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:42.636890   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:42.649702   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:42.928910   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:42.938510   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:43.137604   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:43.150448   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:43.434279   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:43.441620   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:43.552526   14921 pod_ready.go:102] pod "coredns-5dd5756b68-mxmdx" in "kube-system" namespace has status "Ready":"False"
	I1031 23:45:43.644881   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:43.649143   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:43.927837   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:43.943922   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:44.136617   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:44.151990   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:44.441690   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:44.443995   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:44.639059   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:44.656367   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:44.927598   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:44.939194   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:45.139440   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:45.150019   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:45.434725   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:45.438009   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:45.575786   14921 pod_ready.go:102] pod "coredns-5dd5756b68-mxmdx" in "kube-system" namespace has status "Ready":"False"
	I1031 23:45:45.637794   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:45.648905   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:45.928372   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:45.934662   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:46.136442   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:46.148336   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:46.427025   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:46.435751   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:46.637175   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:46.651707   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:47.052983   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:47.053501   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:47.136671   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:47.148203   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:47.427690   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:47.435016   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:47.636555   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:47.648780   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:47.928572   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:47.935786   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:48.045280   14921 pod_ready.go:102] pod "coredns-5dd5756b68-mxmdx" in "kube-system" namespace has status "Ready":"False"
	I1031 23:45:48.136902   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:48.148322   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:48.427755   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:48.435414   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:48.636149   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:48.648723   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:48.927748   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:48.936100   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:49.136780   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:49.148115   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:49.428785   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:49.435686   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:49.637425   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:49.648107   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:49.928379   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:49.934993   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:50.138920   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:50.159214   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:50.428079   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:50.435548   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:50.547072   14921 pod_ready.go:102] pod "coredns-5dd5756b68-mxmdx" in "kube-system" namespace has status "Ready":"False"
	I1031 23:45:50.636594   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:50.650726   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:50.928234   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:50.935017   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:51.136446   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:51.154033   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:51.429231   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:51.434786   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:51.638300   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:51.649722   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:51.927659   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:51.935435   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:52.136430   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:52.148428   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:52.428129   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:52.436475   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:52.636296   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:52.649214   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:52.928471   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:52.940521   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:53.043996   14921 pod_ready.go:102] pod "coredns-5dd5756b68-mxmdx" in "kube-system" namespace has status "Ready":"False"
	I1031 23:45:53.136892   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:53.148962   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:53.428677   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:53.436131   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:53.639840   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:53.648900   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:53.929836   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:53.935418   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:54.137140   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:54.149350   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:54.428159   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:54.435744   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:54.636636   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:54.651335   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:54.928766   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:54.935349   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:55.045379   14921 pod_ready.go:102] pod "coredns-5dd5756b68-mxmdx" in "kube-system" namespace has status "Ready":"False"
	I1031 23:45:55.137353   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:55.150032   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:55.428840   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:55.435992   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:55.636331   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:55.649759   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:55.928120   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:55.935796   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:56.137034   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:56.148877   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:56.428178   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:56.435388   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:56.637712   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:56.648320   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:56.928227   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:56.935795   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:57.136979   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:57.149046   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:57.427713   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:57.435390   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:57.544268   14921 pod_ready.go:102] pod "coredns-5dd5756b68-mxmdx" in "kube-system" namespace has status "Ready":"False"
	I1031 23:45:57.636790   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:57.651760   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:57.928960   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:57.936081   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:58.136656   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:58.149290   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:58.428973   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:58.435667   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:58.636946   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:58.650790   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:58.927725   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:58.935775   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:59.137103   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:59.149369   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:59.427926   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:59.435885   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:45:59.548145   14921 pod_ready.go:102] pod "coredns-5dd5756b68-mxmdx" in "kube-system" namespace has status "Ready":"False"
	I1031 23:45:59.637091   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:45:59.649139   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:45:59.928593   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:45:59.935388   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:00.136707   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:00.149111   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:00.427648   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:00.435118   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:00.638127   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:00.648792   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:00.929001   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:00.936485   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:01.136677   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:01.157107   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:01.428332   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:01.437226   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:01.548614   14921 pod_ready.go:102] pod "coredns-5dd5756b68-mxmdx" in "kube-system" namespace has status "Ready":"False"
	I1031 23:46:01.637386   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:01.649012   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:01.928532   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:01.934674   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:02.136656   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:02.148839   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:02.428387   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:02.434940   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:02.636273   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:02.649826   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:02.928887   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:02.936002   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:03.136456   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:03.147906   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:03.429518   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:03.436038   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:03.645272   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:03.655831   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:03.928698   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:03.935867   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:04.044656   14921 pod_ready.go:102] pod "coredns-5dd5756b68-mxmdx" in "kube-system" namespace has status "Ready":"False"
	I1031 23:46:04.137590   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:04.148753   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:04.428024   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:04.435695   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:04.637020   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:04.649312   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:04.929226   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:04.934706   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:05.137707   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:05.149043   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:05.430427   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:05.435523   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:05.636878   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:05.648868   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:05.927730   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:05.935399   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:06.045630   14921 pod_ready.go:102] pod "coredns-5dd5756b68-mxmdx" in "kube-system" namespace has status "Ready":"False"
	I1031 23:46:06.137019   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:06.149101   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:06.428049   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:06.435841   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:06.637214   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:06.648645   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:07.285996   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:07.286715   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:07.288119   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:07.288989   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:07.291112   14921 pod_ready.go:92] pod "coredns-5dd5756b68-mxmdx" in "kube-system" namespace has status "Ready":"True"
	I1031 23:46:07.291130   14921 pod_ready.go:81] duration metric: took 37.661261651s waiting for pod "coredns-5dd5756b68-mxmdx" in "kube-system" namespace to be "Ready" ...
	I1031 23:46:07.291141   14921 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-798361" in "kube-system" namespace to be "Ready" ...
	I1031 23:46:07.296868   14921 pod_ready.go:92] pod "etcd-addons-798361" in "kube-system" namespace has status "Ready":"True"
	I1031 23:46:07.296889   14921 pod_ready.go:81] duration metric: took 5.741096ms waiting for pod "etcd-addons-798361" in "kube-system" namespace to be "Ready" ...
	I1031 23:46:07.296900   14921 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-798361" in "kube-system" namespace to be "Ready" ...
	I1031 23:46:07.301994   14921 pod_ready.go:92] pod "kube-apiserver-addons-798361" in "kube-system" namespace has status "Ready":"True"
	I1031 23:46:07.302013   14921 pod_ready.go:81] duration metric: took 5.105228ms waiting for pod "kube-apiserver-addons-798361" in "kube-system" namespace to be "Ready" ...
	I1031 23:46:07.302025   14921 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-798361" in "kube-system" namespace to be "Ready" ...
	I1031 23:46:07.306760   14921 pod_ready.go:92] pod "kube-controller-manager-addons-798361" in "kube-system" namespace has status "Ready":"True"
	I1031 23:46:07.306780   14921 pod_ready.go:81] duration metric: took 4.748841ms waiting for pod "kube-controller-manager-addons-798361" in "kube-system" namespace to be "Ready" ...
	I1031 23:46:07.306789   14921 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-scpgx" in "kube-system" namespace to be "Ready" ...
	I1031 23:46:07.312109   14921 pod_ready.go:92] pod "kube-proxy-scpgx" in "kube-system" namespace has status "Ready":"True"
	I1031 23:46:07.312126   14921 pod_ready.go:81] duration metric: took 5.332345ms waiting for pod "kube-proxy-scpgx" in "kube-system" namespace to be "Ready" ...
	I1031 23:46:07.312135   14921 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-798361" in "kube-system" namespace to be "Ready" ...
	I1031 23:46:07.428414   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:07.434929   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:07.490811   14921 pod_ready.go:92] pod "kube-scheduler-addons-798361" in "kube-system" namespace has status "Ready":"True"
	I1031 23:46:07.490839   14921 pod_ready.go:81] duration metric: took 178.697722ms waiting for pod "kube-scheduler-addons-798361" in "kube-system" namespace to be "Ready" ...
	I1031 23:46:07.490847   14921 pod_ready.go:38] duration metric: took 41.936113658s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
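The interleaved `kapi.go:96` and `pod_ready.go` lines above are readiness polls: list pods by label selector (or fetch them by name), check their phase and `Ready` condition, and retry on an interval until they report ready. A minimal sketch of that loop with client-go follows, assuming a reachable kubeconfig; the namespace and selector are taken from the log, but the code is an illustration of the polling pattern, not minikube's implementation.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	selector := "app.kubernetes.io/name=ingress-nginx" // label selector seen in the log
	for {
		pods, err := client.CoreV1().Pods("ingress-nginx").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			panic(err)
		}
		allReady := len(pods.Items) > 0
		for i := range pods.Items {
			p := &pods.Items[i]
			fmt.Printf("pod %s phase=%s ready=%v\n", p.Name, p.Status.Phase, podReady(p))
			if !podReady(p) {
				allReady = false
			}
		}
		if allReady {
			return
		}
		time.Sleep(500 * time.Millisecond) // the log shows polls roughly every half second
	}
}
```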
	I1031 23:46:07.490863   14921 api_server.go:52] waiting for apiserver process to appear ...
	I1031 23:46:07.490908   14921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 23:46:07.528781   14921 api_server.go:72] duration metric: took 42.181111728s to wait for apiserver process to appear ...
	I1031 23:46:07.528804   14921 api_server.go:88] waiting for apiserver healthz status ...
	I1031 23:46:07.528824   14921 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8443/healthz ...
	I1031 23:46:07.533691   14921 api_server.go:279] https://192.168.39.214:8443/healthz returned 200:
	ok
	I1031 23:46:07.534961   14921 api_server.go:141] control plane version: v1.28.3
	I1031 23:46:07.534981   14921 api_server.go:131] duration metric: took 6.171649ms to wait for apiserver health ...
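The healthz probe above is a plain GET against the apiserver's `/healthz` endpoint; an HTTP 200 with body `ok` (as logged) is treated as healthy. A minimal sketch of the same probe through client-go's raw REST client, assuming the kubeconfig already carries valid credentials for the cluster:

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /healthz on the apiserver; this run saw "200: ok" from https://192.168.39.214:8443/healthz.
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // prints "ok" when healthy
}
```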
	I1031 23:46:07.534989   14921 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 23:46:07.636522   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:07.649761   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:07.695713   14921 system_pods.go:59] 18 kube-system pods found
	I1031 23:46:07.695751   14921 system_pods.go:61] "coredns-5dd5756b68-mxmdx" [bdce0e30-4f2a-405c-a37c-3dd5009e5544] Running
	I1031 23:46:07.695757   14921 system_pods.go:61] "csi-hostpath-attacher-0" [1379f997-c7bc-48f1-aa9d-9ae4a2da90ee] Running
	I1031 23:46:07.695761   14921 system_pods.go:61] "csi-hostpath-resizer-0" [bcbc78e9-ed71-4303-a11f-448f20aeeba8] Running
	I1031 23:46:07.695768   14921 system_pods.go:61] "csi-hostpathplugin-w4wn6" [d236b3b0-387b-4f05-bae4-caa66704dd80] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1031 23:46:07.695774   14921 system_pods.go:61] "etcd-addons-798361" [4e4efb21-714b-4c7d-95c7-58f8bceae766] Running
	I1031 23:46:07.695781   14921 system_pods.go:61] "kube-apiserver-addons-798361" [0102d64f-a7f6-4006-9e31-9ea76259e1d8] Running
	I1031 23:46:07.695785   14921 system_pods.go:61] "kube-controller-manager-addons-798361" [7bb2f7d8-d198-4b23-a71f-3253d559b560] Running
	I1031 23:46:07.695789   14921 system_pods.go:61] "kube-ingress-dns-minikube" [6db8a83e-76e9-449e-b3e2-c9bc0acf077f] Running
	I1031 23:46:07.695793   14921 system_pods.go:61] "kube-proxy-scpgx" [ede10144-6a51-452e-b32d-eef8b938bacd] Running
	I1031 23:46:07.695797   14921 system_pods.go:61] "kube-scheduler-addons-798361" [a1062431-1ec9-4680-bf2f-f1e655251af7] Running
	I1031 23:46:07.695802   14921 system_pods.go:61] "metrics-server-7c66d45ddc-zzhks" [d8225a7f-92d4-400f-83e4-12260eae77aa] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 23:46:07.695806   14921 system_pods.go:61] "nvidia-device-plugin-daemonset-lvpgt" [9c45c998-76b8-4253-9b15-cd3a9d7756be] Running
	I1031 23:46:07.695812   14921 system_pods.go:61] "registry-proxy-b44rj" [6381e896-06e3-4249-96d2-436fd28a088d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1031 23:46:07.695818   14921 system_pods.go:61] "registry-z954q" [bdbe9b30-2dde-43e5-a3b9-d5747f4c16ab] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1031 23:46:07.695824   14921 system_pods.go:61] "snapshot-controller-58dbcc7b99-8bltb" [b7611427-c24e-4c66-9575-2fb8272993cd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1031 23:46:07.695831   14921 system_pods.go:61] "snapshot-controller-58dbcc7b99-psqz8" [258033be-2ce4-4c7d-8b59-2dd0aa7cc2a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1031 23:46:07.695836   14921 system_pods.go:61] "storage-provisioner" [506fd78c-7afe-46af-90fc-c1cf59f5aa05] Running
	I1031 23:46:07.695844   14921 system_pods.go:61] "tiller-deploy-7b677967b9-m2w9s" [67f715dc-230b-49dc-8a07-bd8b3586a4cf] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1031 23:46:07.695851   14921 system_pods.go:74] duration metric: took 160.857142ms to wait for pod list to return data ...
	I1031 23:46:07.695873   14921 default_sa.go:34] waiting for default service account to be created ...
	I1031 23:46:07.890285   14921 default_sa.go:45] found service account: "default"
	I1031 23:46:07.890307   14921 default_sa.go:55] duration metric: took 194.428719ms for default service account to be created ...
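The default_sa step simply looks up the `default` ServiceAccount in the `default` namespace and succeeds once kube-controller-manager has created it. A one-lookup sketch with client-go, assuming the same clientset setup as in the earlier sketches:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The default ServiceAccount appears asynchronously; a waiter would retry this until it succeeds.
	sa, err := client.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("found service account:", sa.Name)
}
```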
	I1031 23:46:07.890315   14921 system_pods.go:116] waiting for k8s-apps to be running ...
	I1031 23:46:07.927520   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:07.935746   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:08.098324   14921 system_pods.go:86] 18 kube-system pods found
	I1031 23:46:08.098354   14921 system_pods.go:89] "coredns-5dd5756b68-mxmdx" [bdce0e30-4f2a-405c-a37c-3dd5009e5544] Running
	I1031 23:46:08.098359   14921 system_pods.go:89] "csi-hostpath-attacher-0" [1379f997-c7bc-48f1-aa9d-9ae4a2da90ee] Running
	I1031 23:46:08.098363   14921 system_pods.go:89] "csi-hostpath-resizer-0" [bcbc78e9-ed71-4303-a11f-448f20aeeba8] Running
	I1031 23:46:08.098370   14921 system_pods.go:89] "csi-hostpathplugin-w4wn6" [d236b3b0-387b-4f05-bae4-caa66704dd80] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1031 23:46:08.098375   14921 system_pods.go:89] "etcd-addons-798361" [4e4efb21-714b-4c7d-95c7-58f8bceae766] Running
	I1031 23:46:08.098380   14921 system_pods.go:89] "kube-apiserver-addons-798361" [0102d64f-a7f6-4006-9e31-9ea76259e1d8] Running
	I1031 23:46:08.098384   14921 system_pods.go:89] "kube-controller-manager-addons-798361" [7bb2f7d8-d198-4b23-a71f-3253d559b560] Running
	I1031 23:46:08.098388   14921 system_pods.go:89] "kube-ingress-dns-minikube" [6db8a83e-76e9-449e-b3e2-c9bc0acf077f] Running
	I1031 23:46:08.098392   14921 system_pods.go:89] "kube-proxy-scpgx" [ede10144-6a51-452e-b32d-eef8b938bacd] Running
	I1031 23:46:08.098397   14921 system_pods.go:89] "kube-scheduler-addons-798361" [a1062431-1ec9-4680-bf2f-f1e655251af7] Running
	I1031 23:46:08.098402   14921 system_pods.go:89] "metrics-server-7c66d45ddc-zzhks" [d8225a7f-92d4-400f-83e4-12260eae77aa] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 23:46:08.098406   14921 system_pods.go:89] "nvidia-device-plugin-daemonset-lvpgt" [9c45c998-76b8-4253-9b15-cd3a9d7756be] Running
	I1031 23:46:08.098412   14921 system_pods.go:89] "registry-proxy-b44rj" [6381e896-06e3-4249-96d2-436fd28a088d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1031 23:46:08.098417   14921 system_pods.go:89] "registry-z954q" [bdbe9b30-2dde-43e5-a3b9-d5747f4c16ab] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1031 23:46:08.098423   14921 system_pods.go:89] "snapshot-controller-58dbcc7b99-8bltb" [b7611427-c24e-4c66-9575-2fb8272993cd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1031 23:46:08.098429   14921 system_pods.go:89] "snapshot-controller-58dbcc7b99-psqz8" [258033be-2ce4-4c7d-8b59-2dd0aa7cc2a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1031 23:46:08.098437   14921 system_pods.go:89] "storage-provisioner" [506fd78c-7afe-46af-90fc-c1cf59f5aa05] Running
	I1031 23:46:08.098442   14921 system_pods.go:89] "tiller-deploy-7b677967b9-m2w9s" [67f715dc-230b-49dc-8a07-bd8b3586a4cf] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1031 23:46:08.098449   14921 system_pods.go:126] duration metric: took 208.129531ms to wait for k8s-apps to be running ...
	I1031 23:46:08.098456   14921 system_svc.go:44] waiting for kubelet service to be running ....
	I1031 23:46:08.098496   14921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 23:46:08.118398   14921 system_svc.go:56] duration metric: took 19.9331ms WaitForService to wait for kubelet.
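The kubelet check above shells into the guest and asks systemd whether the unit is active; an exit status of 0 from `systemctl is-active --quiet` means active. A minimal local sketch of the same probe, assuming a systemd host and that the caller has permission to query unit state without sudo:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit status 0 from "systemctl is-active --quiet kubelet" indicates the unit is active.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
```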
	I1031 23:46:08.118424   14921 kubeadm.go:581] duration metric: took 42.770760985s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1031 23:46:08.118441   14921 node_conditions.go:102] verifying NodePressure condition ...
	I1031 23:46:08.136939   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:08.148861   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:08.290969   14921 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 23:46:08.291008   14921 node_conditions.go:123] node cpu capacity is 2
	I1031 23:46:08.291032   14921 node_conditions.go:105] duration metric: took 172.586684ms to run NodePressure ...
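The NodePressure step reads each node's reported capacity (this run logs 17784752Ki of ephemeral storage and 2 CPUs) and inspects the pressure conditions (MemoryPressure, DiskPressure, PIDPressure), which should all be False on a healthy node. A minimal sketch of reading the same fields with client-go, assuming the clientset setup shown in the earlier sketches:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				// A True status on any of these would indicate the node is under resource pressure.
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}
```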
	I1031 23:46:08.291046   14921 start.go:228] waiting for startup goroutines ...
	I1031 23:46:08.451704   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:08.452337   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:08.637022   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:08.648568   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:08.927547   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:08.935092   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:09.137019   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:09.149310   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:09.427436   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:09.436841   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:09.637444   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:09.649406   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:09.936292   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:09.943131   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:10.136942   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:10.151034   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:10.430078   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:10.436955   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:10.637683   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:10.649333   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:10.928583   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:10.936265   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:11.137485   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:11.148584   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:11.428325   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:11.435309   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:11.637529   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:11.648708   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:11.928466   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:11.943836   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:12.137326   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:12.150609   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:12.436135   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:12.436642   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:12.637137   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:12.650959   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:12.962200   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:12.965424   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:13.136871   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:13.151248   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:13.482069   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:13.482185   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:13.644236   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:13.650099   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:13.927899   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:13.939191   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:14.136846   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:14.160609   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:14.427178   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:14.435387   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:14.637388   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:14.668797   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:14.935667   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:14.941925   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:15.136773   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:15.149750   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:15.427404   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:15.434886   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:15.637384   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:15.650161   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:15.928389   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:15.938430   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:16.138407   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:16.149193   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:16.428193   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:16.435759   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:16.637042   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:16.649295   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:16.929341   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:16.935351   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:17.137330   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:17.149363   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:17.427851   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:17.436962   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:17.637285   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:17.649337   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:17.929292   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:17.936234   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:18.137288   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:18.149930   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:18.428094   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:18.436319   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:18.638536   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:18.653312   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:19.127368   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:19.128790   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:19.136757   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:19.149773   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:19.428269   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:19.434524   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:19.635943   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:19.648608   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:19.927265   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:19.934683   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:20.136199   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:20.148981   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:20.428657   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:20.437361   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:20.638822   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:20.648206   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:20.933668   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:20.935763   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:21.141964   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:21.148538   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:21.428247   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:21.434758   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:21.638471   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:21.647816   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:21.928319   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:21.934879   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:22.136809   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:22.151642   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:22.427963   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:22.436497   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:22.636967   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:22.648753   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:22.928824   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:22.944620   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:23.136469   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:23.149546   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:23.428206   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:23.436102   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:23.643638   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:23.650584   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:23.927968   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:23.935486   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:24.136656   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:24.148554   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:24.430852   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:24.436038   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:24.636924   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:24.649148   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:24.928008   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:24.936816   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:25.139942   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:25.150564   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:25.428404   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:25.434984   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:25.637315   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:25.648922   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:25.929003   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:25.936142   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:26.137546   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:26.148928   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:26.572593   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:26.573412   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:26.637806   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:26.653615   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:26.928516   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:26.938492   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:27.137689   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:27.149459   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:27.428136   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:27.434914   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:27.636204   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:27.649370   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:27.928344   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:27.935690   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:28.138701   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:28.168232   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:28.432202   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:28.439551   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:28.636970   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:28.649027   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:28.928517   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:28.934751   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:29.137280   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:29.149667   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:29.428103   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:29.435670   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:29.636927   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:29.652993   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:29.928820   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:29.935026   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:30.137093   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:30.148927   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:30.428075   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:30.445145   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:30.638059   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:30.650260   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:30.933468   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1031 23:46:30.940538   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:31.138481   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:31.152067   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:31.427619   14921 kapi.go:107] duration metric: took 57.537595644s to wait for kubernetes.io/minikube-addons=registry ...
	I1031 23:46:31.438221   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:31.637403   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:31.650022   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:31.935580   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:32.137296   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:32.150361   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:32.436418   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:32.637420   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:32.648448   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:32.935791   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:33.139072   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:33.154451   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:33.437332   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:33.641466   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:33.648299   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:33.935164   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:34.142847   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:34.156043   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:34.435361   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:34.636894   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:34.651101   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:34.936512   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:35.136753   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:35.149157   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:35.435959   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:35.637305   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:35.650150   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:35.938119   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:36.137155   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:36.149106   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:36.436318   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:36.636377   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:36.648099   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:36.936024   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:37.137267   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:37.149962   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:37.436183   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:37.638869   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:37.653108   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:37.935926   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:38.136992   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:38.152729   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:38.436015   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:38.646209   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:38.649896   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:38.935827   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:39.136959   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:39.149622   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:39.436255   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:39.638599   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:39.656682   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:39.935598   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:40.299732   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:40.300948   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:40.436870   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:40.637192   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:40.648891   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:40.936182   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:41.136972   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:41.148868   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:41.435455   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:41.640649   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:41.648578   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:41.943745   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:42.146295   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:42.154336   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:42.437039   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:42.636924   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:42.649735   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:42.935090   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:43.136835   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:43.154427   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:43.459988   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:43.641581   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:43.652519   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:43.936661   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:44.137474   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:44.149703   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:44.435586   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:44.636938   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:44.649742   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:44.935998   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:45.140382   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:45.159917   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:45.440419   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:45.639034   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:45.649498   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:45.936850   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:46.140517   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:46.158938   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:46.438975   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:46.637029   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:46.648430   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:46.941369   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:47.136012   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:47.149169   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:47.514390   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:47.646974   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:47.651607   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:47.936244   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:48.137138   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:48.150691   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:48.435617   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:48.638276   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:48.653115   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:48.936685   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:49.137394   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:49.155528   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:49.435727   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:49.636736   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:49.652774   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:49.935120   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:50.137663   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:50.148675   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:50.436575   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:50.637430   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:50.650626   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:50.934919   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:51.137429   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:51.148270   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:51.436271   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:51.636678   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:51.649193   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:51.937518   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:52.137832   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:52.149831   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1031 23:46:52.435650   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:52.637383   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:52.667292   14921 kapi.go:107] duration metric: took 1m18.212785974s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1031 23:46:52.937143   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:53.137040   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:53.435683   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:53.653462   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:53.936057   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:54.136624   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:54.435993   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:54.637432   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:54.935889   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:55.137301   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:55.435153   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:55.636700   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:55.936320   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:56.136516   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:56.436953   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:56.638001   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:56.936100   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:57.137698   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:57.437028   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:57.636597   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:57.936300   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:58.136888   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:58.435060   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:58.636908   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:58.935688   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:59.137063   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:59.435517   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:46:59.636644   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:46:59.936746   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:00.137063   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:00.435863   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:00.637013   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:00.935705   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:01.138265   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:01.435881   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:01.637579   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:01.936904   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:02.138660   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:02.436517   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:02.637836   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:02.935461   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:03.137425   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:03.438862   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:03.637232   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:03.935986   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:04.137775   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:04.435040   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:04.637560   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:04.936150   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:05.137883   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:05.436319   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:05.636679   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:05.936359   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:06.138080   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:06.436069   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:06.637557   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:06.936275   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:07.137361   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:07.435247   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:07.636865   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:07.935656   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:08.136937   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:08.436043   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:08.636666   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:08.937662   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:09.137299   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:09.435986   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:09.638214   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:09.935747   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:10.136187   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:10.435337   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:10.637003   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:10.935356   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:11.136642   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:11.436507   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:11.636773   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:11.935305   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:12.136783   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:12.436160   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:12.636698   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:12.938045   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:13.136031   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:13.436169   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:13.638725   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:13.936468   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:14.136230   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:14.435439   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:14.636792   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:14.935626   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:15.136907   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:15.436224   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:15.636196   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:15.935619   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:16.137213   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:16.436197   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:16.637417   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:16.936157   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:17.137370   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:17.435177   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:17.636313   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:17.936128   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:18.136333   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:18.437151   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:18.637343   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:18.936612   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:19.137365   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:19.436324   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:19.636675   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:19.935868   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:20.137434   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:20.436674   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:20.636880   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:20.936539   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:21.137291   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:21.436717   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:21.636968   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:21.935731   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:22.136823   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:22.435266   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:22.636564   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:22.936413   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:23.136835   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:23.435895   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:23.641194   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:24.002938   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:24.137458   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:24.436538   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:24.642938   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:24.936652   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:25.137781   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:25.435805   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:25.636849   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:25.935288   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:26.136498   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:26.436310   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:26.636467   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:26.936523   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:27.136939   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:27.436579   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:27.637918   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:27.936081   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:28.138432   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:28.438776   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:28.636387   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:28.935618   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:29.137843   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:29.435627   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:29.636965   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:29.936549   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:30.136191   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:30.436068   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:30.637201   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:30.936990   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:31.137358   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:31.436167   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:31.636496   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:31.937303   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:32.137316   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:32.436726   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:32.637427   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:32.937660   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:33.137087   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:33.440110   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:33.637392   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:33.936122   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:34.136547   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:34.437071   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:34.636692   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:34.934883   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:35.137348   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:35.435861   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:35.637279   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:35.935988   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:36.142024   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:36.435383   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:36.636403   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:36.935880   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:37.137044   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:37.436259   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:37.637413   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:37.936177   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:38.136553   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:38.436336   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:38.637554   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:38.937255   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:39.137134   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:39.436184   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:39.637291   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:39.935880   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:40.136567   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:40.436650   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:40.636566   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:40.936698   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:41.136314   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:41.436521   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:41.636927   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:41.935568   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:42.137214   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:42.436055   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:42.636093   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:42.936467   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:43.137624   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:43.438825   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:43.638189   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:43.936333   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:44.136961   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:44.435558   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:44.636196   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:44.935783   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:45.136446   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:45.436285   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:45.636833   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:45.936604   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:46.136246   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:46.436007   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:46.637213   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:46.936623   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:47.137037   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:47.435968   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:47.636281   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:47.936376   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:48.135950   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:48.436642   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:48.636842   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:48.935654   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:49.136961   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:49.435665   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:49.637389   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:49.936616   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:50.139919   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:50.435850   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:50.647670   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:50.935943   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:51.159014   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:51.435911   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:51.639963   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:51.936973   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:52.137172   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:52.437006   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:52.638114   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:52.941562   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:53.136768   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:53.435358   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:53.639372   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:53.936125   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:54.137179   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:54.436349   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:54.636408   14921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1031 23:47:54.959965   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:55.137231   14921 kapi.go:107] duration metric: took 2m18.534333572s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1031 23:47:55.139402   14921 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-798361 cluster.
	I1031 23:47:55.141621   14921 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1031 23:47:55.143492   14921 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1031 23:47:55.437746   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:55.936470   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:56.436949   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:56.936257   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:57.435649   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:57.935425   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:58.566918   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:58.936648   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:59.436054   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:47:59.935964   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:48:00.440375   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:48:00.935477   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:48:01.436966   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:48:01.935455   14921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1031 23:48:02.436132   14921 kapi.go:107] duration metric: took 2m28.5495402s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1031 23:48:02.438203   14921 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, storage-provisioner, metrics-server, helm-tiller, inspektor-gadget, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I1031 23:48:02.439590   14921 addons.go:502] enable addons completed in 2m37.351957716s: enabled=[cloud-spanner nvidia-device-plugin ingress-dns storage-provisioner metrics-server helm-tiller inspektor-gadget default-storageclass volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I1031 23:48:02.439637   14921 start.go:233] waiting for cluster config update ...
	I1031 23:48:02.439658   14921 start.go:242] writing updated cluster config ...
	I1031 23:48:02.439907   14921 ssh_runner.go:195] Run: rm -f paused
	I1031 23:48:02.491881   14921 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1031 23:48:02.494096   14921 out.go:177] * Done! kubectl is now configured to use "addons-798361" cluster and "default" namespace by default
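	(Editor's note: the gcp-auth message earlier in this log says pods can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. A minimal sketch of such a pod follows; the pod name, image, and label value "true" are illustrative assumptions and not taken from this run.)

	# hypothetical pod that opts out of GCP credential mounting
	cat <<'EOF' | kubectl --context addons-798361 apply -f -
	apiVersion: v1
	kind: Pod
	metadata:
	  name: skip-gcp-auth-demo          # illustrative name
	  labels:
	    gcp-auth-skip-secret: "true"    # label key from the log above; value assumed
	spec:
	  containers:
	  - name: busybox
	    image: busybox
	    command: ["sleep", "3600"]
	EOF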
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-10-31 23:44:40 UTC, ends at Tue 2023-10-31 23:51:02 UTC. --
	Oct 31 23:51:01 addons-798361 crio[715]: time="2023-10-31 23:51:01.848373477Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698796261848351411,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:529245,},InodesUsed:&UInt64Value{Value:221,},},},}" file="go-grpc-middleware/chain.go:25" id=d21bc05d-75c2-4f43-956a-d8feeaa27de4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 23:51:01 addons-798361 crio[715]: time="2023-10-31 23:51:01.849467296Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8f72ebb1-0932-47b8-b7e6-0f4aef878cbe name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 23:51:01 addons-798361 crio[715]: time="2023-10-31 23:51:01.849530428Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8f72ebb1-0932-47b8-b7e6-0f4aef878cbe name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 23:51:01 addons-798361 crio[715]: time="2023-10-31 23:51:01.850020873Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd8ba1139203435974fd89ead36904a20100751e05245f6708c940294de3b42c,PodSandboxId:247f9b86b88101ed0c09f08a8303f4cf8e191c6d89b517cd87017d49e3adbdd6,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d,State:CONTAINER_RUNNING,CreatedAt:1698796254761398238,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-cqnnz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 818c1d3d-8f4e-4481-a761-9c45fd02d5fa,},Annotations:map[string]string{io.kubernetes.container.hash: f218526e,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49fa61dfa90506d75b21237a878fd150683f4967c648106a965093eba089ccc4,PodSandboxId:fdf61859d51e0ce6c1dbf4d4b8cdc5e19896431617c49f6b21f2419720d7eb20,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4,State:CONTAINER_RUNNING,CreatedAt:1698796132329429740,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-94b766c-fzcp5,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: d63e52a3-c7fc-4035-b867-099157e15969,},Annot
ations:map[string]string{io.kubernetes.container.hash: a8721482,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd69bd70d2c83fab04f29ea983fa6d44a2febe0473a2451660e31d32a98e74bb,PodSandboxId:188de3bbe4f768fa0cf8a8d6d1817dbdbc211b371ec210ee152ef2286710e6a8,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1698796113247315745,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io
.kubernetes.pod.uid: 95a3b560-345b-4ce8-aecb-2b42ff0e1ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 7fd6e9bd,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c62294a20df1d6d4f98d1c8f7f7d7e88f25f74559150f54eaafccd9e1c3795a,PodSandboxId:2b050484940cfbf79bcf0a68b8174edfe698f934328647035381c5b869de5131,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1698796074225315761,Labels:map[string]string{io.kubernetes.container.name: g
cp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-qrqv8,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 4a43309c-dcdc-4822-8ca8-e266658cd278,},Annotations:map[string]string{io.kubernetes.container.hash: e700df47,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9137074aa273eabad6a9e29738ac93a9611d458d68b13bcda96856ddc398524a,PodSandboxId:e0b5705af9c0eee50cea7795e4f50cfc9ad93a1a17bd4c93b84762c82e8e84a2,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:16987960060
08018357,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-lzt55,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8d94da0e-b439-4d7d-a977-ba71eb74b3f4,},Annotations:map[string]string{io.kubernetes.container.hash: 6998c5c5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47bb7f64dd213a302756bbf8735daf27b021c2ad5a4569327aa529d520a819c0,PodSandboxId:628a699113376601a440d4df8c6935171d71e12c662889decf3a66d40646080d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b
9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1698796002053625697,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-pd6zk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4fa0ac04-112c-473f-a008-ef4c0231c0c8,},Annotations:map[string]string{io.kubernetes.container.hash: 5265ed9b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62f84de30b61b0b269c8a568ff592b54c9b580b9aef22e31c3b549846f3cc3fe,PodSandboxId:7c52710fbf8915c8acf8c0fe52f8d86296da0493acaacc7125430429a22993df,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@sh
a256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1698795995761365658,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-jt27k,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3a866d4a-c60c-4fc5-a03b-645fd5c4bcef,},Annotations:map[string]string{io.kubernetes.container.hash: ad7a1332,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67e8c850920f78d77d5c549bb3affb9e5f5c59072e76b98136c1659f9d27f1c4,PodSandboxId:f8b1d4f9937cb456fd373bb0c914d5c1f2759a49d6b979ba14ef226c4dcea28c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-mi
nikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698795972161455184,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 506fd78c-7afe-46af-90fc-c1cf59f5aa05,},Annotations:map[string]string{io.kubernetes.container.hash: 19ebd459,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2490e87b4d4d68581243a7f46b88b2c1d0d1de9d6e988f11632aaec86fcdbc07,PodSandboxId:f8b1d4f9937cb456fd373bb0c914d5c1f2759a49d6b979ba14ef226c4dcea28c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-min
ikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698795940066220566,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 506fd78c-7afe-46af-90fc-c1cf59f5aa05,},Annotations:map[string]string{io.kubernetes.container.hash: 19ebd459,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:341dc6c0352fe53f278c2b480487ff6ee7f19ee17ef26e993885ed9100959c22,PodSandboxId:fad96a9df30951510616d939710cab9038a24ddc1924e1a40545612509c971e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-prox
y@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698795932864291705,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-scpgx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ede10144-6a51-452e-b32d-eef8b938bacd,},Annotations:map[string]string{io.kubernetes.container.hash: 1f6d3478,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a3a9ee561bb8c3856a607b4083289a6110aebc5b06a4c4020ed2d33ea4d871b,PodSandboxId:f1c5cc570ba6c2b5ee87d83856d8fbbcbc69e0d0424c818f196b9314e0fc7b24,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0
feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698795929442585104,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mxmdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdce0e30-4f2a-405c-a37c-3dd5009e5544,},Annotations:map[string]string{io.kubernetes.container.hash: d13477bd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda2f6093f954bdce2518bff93c5285059340e550932b5204a19d53fbd300558,PodSandboxId:eee7006a87ed7f4f37fd2bafa8196c43d7caea77672ac94d3d6f030e3beec38f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,}
,Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698795905480625894,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-798361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c9063001c736534711d9be4325debc3,},Annotations:map[string]string{io.kubernetes.container.hash: b28b58e4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77816da35b1f638d6c387f32e15753dcecb0c78e21760f8d3fc41c9f251608ea,PodSandboxId:e93c93224df8add0dca58ed73b8287a41c26440f45557502815d565d5be20f8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748be
c936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698795905501500470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-798361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3707409e65c2a3cb09c052ada1919b,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36b311d0b1b42c375e811119ae9af84b0d191d01fef770a510f4cb34de1aa09d,PodSandboxId:dfd3fe077099049390000652ae1c40bd9b0651f3e6e0ee3ef55d45e9eadf7ea6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18
abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698795904876081861,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-798361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb2fa43e5a0c399b68e0ff0e26eccf3,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd74b1e826f8a2407d8eb4a4f00121d94b4c41796a12ddcbf77e135832f18743,PodSandboxId:02f59606dd24645098be21514cb4dcdc7d3eb01f6f2ee346db20fd24244d2331,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d
2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698795904813993163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-798361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d6b5eaa303f36e2f6b5ce833246913,},Annotations:map[string]string{io.kubernetes.container.hash: cff31595,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8f72ebb1-0932-47b8-b7e6-0f4aef878cbe name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 23:51:01 addons-798361 crio[715]: time="2023-10-31 23:51:01.890564967Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=fa086db8-6eb2-406d-8aa0-f1301214e519 name=/runtime.v1.RuntimeService/Version
	Oct 31 23:51:01 addons-798361 crio[715]: time="2023-10-31 23:51:01.890624298Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=fa086db8-6eb2-406d-8aa0-f1301214e519 name=/runtime.v1.RuntimeService/Version
	Oct 31 23:51:01 addons-798361 crio[715]: time="2023-10-31 23:51:01.891810451Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6166c89b-9d80-410a-93af-7b79e96dde80 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 23:51:01 addons-798361 crio[715]: time="2023-10-31 23:51:01.893332604Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698796261893307588,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:529245,},InodesUsed:&UInt64Value{Value:221,},},},}" file="go-grpc-middleware/chain.go:25" id=6166c89b-9d80-410a-93af-7b79e96dde80 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 23:51:01 addons-798361 crio[715]: time="2023-10-31 23:51:01.894026922Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a373ea83-c494-4803-a48b-cacc7c8f463f name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 23:51:01 addons-798361 crio[715]: time="2023-10-31 23:51:01.894080958Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a373ea83-c494-4803-a48b-cacc7c8f463f name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 23:51:01 addons-798361 crio[715]: time="2023-10-31 23:51:01.894439894Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd8ba1139203435974fd89ead36904a20100751e05245f6708c940294de3b42c,PodSandboxId:247f9b86b88101ed0c09f08a8303f4cf8e191c6d89b517cd87017d49e3adbdd6,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d,State:CONTAINER_RUNNING,CreatedAt:1698796254761398238,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-cqnnz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 818c1d3d-8f4e-4481-a761-9c45fd02d5fa,},Annotations:map[string]string{io.kubernetes.container.hash: f218526e,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49fa61dfa90506d75b21237a878fd150683f4967c648106a965093eba089ccc4,PodSandboxId:fdf61859d51e0ce6c1dbf4d4b8cdc5e19896431617c49f6b21f2419720d7eb20,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4,State:CONTAINER_RUNNING,CreatedAt:1698796132329429740,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-94b766c-fzcp5,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: d63e52a3-c7fc-4035-b867-099157e15969,},Annot
ations:map[string]string{io.kubernetes.container.hash: a8721482,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd69bd70d2c83fab04f29ea983fa6d44a2febe0473a2451660e31d32a98e74bb,PodSandboxId:188de3bbe4f768fa0cf8a8d6d1817dbdbc211b371ec210ee152ef2286710e6a8,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1698796113247315745,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io
.kubernetes.pod.uid: 95a3b560-345b-4ce8-aecb-2b42ff0e1ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 7fd6e9bd,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c62294a20df1d6d4f98d1c8f7f7d7e88f25f74559150f54eaafccd9e1c3795a,PodSandboxId:2b050484940cfbf79bcf0a68b8174edfe698f934328647035381c5b869de5131,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1698796074225315761,Labels:map[string]string{io.kubernetes.container.name: g
cp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-qrqv8,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 4a43309c-dcdc-4822-8ca8-e266658cd278,},Annotations:map[string]string{io.kubernetes.container.hash: e700df47,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9137074aa273eabad6a9e29738ac93a9611d458d68b13bcda96856ddc398524a,PodSandboxId:e0b5705af9c0eee50cea7795e4f50cfc9ad93a1a17bd4c93b84762c82e8e84a2,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:16987960060
08018357,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-lzt55,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8d94da0e-b439-4d7d-a977-ba71eb74b3f4,},Annotations:map[string]string{io.kubernetes.container.hash: 6998c5c5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47bb7f64dd213a302756bbf8735daf27b021c2ad5a4569327aa529d520a819c0,PodSandboxId:628a699113376601a440d4df8c6935171d71e12c662889decf3a66d40646080d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b
9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1698796002053625697,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-pd6zk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4fa0ac04-112c-473f-a008-ef4c0231c0c8,},Annotations:map[string]string{io.kubernetes.container.hash: 5265ed9b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62f84de30b61b0b269c8a568ff592b54c9b580b9aef22e31c3b549846f3cc3fe,PodSandboxId:7c52710fbf8915c8acf8c0fe52f8d86296da0493acaacc7125430429a22993df,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@sh
a256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1698795995761365658,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-jt27k,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3a866d4a-c60c-4fc5-a03b-645fd5c4bcef,},Annotations:map[string]string{io.kubernetes.container.hash: ad7a1332,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67e8c850920f78d77d5c549bb3affb9e5f5c59072e76b98136c1659f9d27f1c4,PodSandboxId:f8b1d4f9937cb456fd373bb0c914d5c1f2759a49d6b979ba14ef226c4dcea28c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-mi
nikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698795972161455184,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 506fd78c-7afe-46af-90fc-c1cf59f5aa05,},Annotations:map[string]string{io.kubernetes.container.hash: 19ebd459,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2490e87b4d4d68581243a7f46b88b2c1d0d1de9d6e988f11632aaec86fcdbc07,PodSandboxId:f8b1d4f9937cb456fd373bb0c914d5c1f2759a49d6b979ba14ef226c4dcea28c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-min
ikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698795940066220566,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 506fd78c-7afe-46af-90fc-c1cf59f5aa05,},Annotations:map[string]string{io.kubernetes.container.hash: 19ebd459,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:341dc6c0352fe53f278c2b480487ff6ee7f19ee17ef26e993885ed9100959c22,PodSandboxId:fad96a9df30951510616d939710cab9038a24ddc1924e1a40545612509c971e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-prox
y@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698795932864291705,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-scpgx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ede10144-6a51-452e-b32d-eef8b938bacd,},Annotations:map[string]string{io.kubernetes.container.hash: 1f6d3478,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a3a9ee561bb8c3856a607b4083289a6110aebc5b06a4c4020ed2d33ea4d871b,PodSandboxId:f1c5cc570ba6c2b5ee87d83856d8fbbcbc69e0d0424c818f196b9314e0fc7b24,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0
feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698795929442585104,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mxmdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdce0e30-4f2a-405c-a37c-3dd5009e5544,},Annotations:map[string]string{io.kubernetes.container.hash: d13477bd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda2f6093f954bdce2518bff93c5285059340e550932b5204a19d53fbd300558,PodSandboxId:eee7006a87ed7f4f37fd2bafa8196c43d7caea77672ac94d3d6f030e3beec38f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,}
,Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698795905480625894,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-798361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c9063001c736534711d9be4325debc3,},Annotations:map[string]string{io.kubernetes.container.hash: b28b58e4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77816da35b1f638d6c387f32e15753dcecb0c78e21760f8d3fc41c9f251608ea,PodSandboxId:e93c93224df8add0dca58ed73b8287a41c26440f45557502815d565d5be20f8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748be
c936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698795905501500470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-798361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3707409e65c2a3cb09c052ada1919b,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36b311d0b1b42c375e811119ae9af84b0d191d01fef770a510f4cb34de1aa09d,PodSandboxId:dfd3fe077099049390000652ae1c40bd9b0651f3e6e0ee3ef55d45e9eadf7ea6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18
abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698795904876081861,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-798361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb2fa43e5a0c399b68e0ff0e26eccf3,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd74b1e826f8a2407d8eb4a4f00121d94b4c41796a12ddcbf77e135832f18743,PodSandboxId:02f59606dd24645098be21514cb4dcdc7d3eb01f6f2ee346db20fd24244d2331,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d
2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698795904813993163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-798361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d6b5eaa303f36e2f6b5ce833246913,},Annotations:map[string]string{io.kubernetes.container.hash: cff31595,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a373ea83-c494-4803-a48b-cacc7c8f463f name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 23:51:01 addons-798361 crio[715]: time="2023-10-31 23:51:01.929351505Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3b16e74e-a1e4-4327-9ce9-90a722a45534 name=/runtime.v1.RuntimeService/Version
	Oct 31 23:51:01 addons-798361 crio[715]: time="2023-10-31 23:51:01.929417522Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3b16e74e-a1e4-4327-9ce9-90a722a45534 name=/runtime.v1.RuntimeService/Version
	Oct 31 23:51:01 addons-798361 crio[715]: time="2023-10-31 23:51:01.931381741Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=dbaeaa03-5fb6-4658-b43e-9f696bb8207f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 23:51:01 addons-798361 crio[715]: time="2023-10-31 23:51:01.932921883Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698796261932843461,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:529245,},InodesUsed:&UInt64Value{Value:221,},},},}" file="go-grpc-middleware/chain.go:25" id=dbaeaa03-5fb6-4658-b43e-9f696bb8207f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 23:51:01 addons-798361 crio[715]: time="2023-10-31 23:51:01.933623570Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d2a601bd-08f5-46e7-9120-73a2a1fed3f7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 23:51:01 addons-798361 crio[715]: time="2023-10-31 23:51:01.933727856Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d2a601bd-08f5-46e7-9120-73a2a1fed3f7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 23:51:01 addons-798361 crio[715]: time="2023-10-31 23:51:01.934482593Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd8ba1139203435974fd89ead36904a20100751e05245f6708c940294de3b42c,PodSandboxId:247f9b86b88101ed0c09f08a8303f4cf8e191c6d89b517cd87017d49e3adbdd6,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d,State:CONTAINER_RUNNING,CreatedAt:1698796254761398238,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-cqnnz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 818c1d3d-8f4e-4481-a761-9c45fd02d5fa,},Annotations:map[string]string{io.kubernetes.container.hash: f218526e,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49fa61dfa90506d75b21237a878fd150683f4967c648106a965093eba089ccc4,PodSandboxId:fdf61859d51e0ce6c1dbf4d4b8cdc5e19896431617c49f6b21f2419720d7eb20,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4,State:CONTAINER_RUNNING,CreatedAt:1698796132329429740,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-94b766c-fzcp5,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: d63e52a3-c7fc-4035-b867-099157e15969,},Annot
ations:map[string]string{io.kubernetes.container.hash: a8721482,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd69bd70d2c83fab04f29ea983fa6d44a2febe0473a2451660e31d32a98e74bb,PodSandboxId:188de3bbe4f768fa0cf8a8d6d1817dbdbc211b371ec210ee152ef2286710e6a8,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1698796113247315745,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io
.kubernetes.pod.uid: 95a3b560-345b-4ce8-aecb-2b42ff0e1ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 7fd6e9bd,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c62294a20df1d6d4f98d1c8f7f7d7e88f25f74559150f54eaafccd9e1c3795a,PodSandboxId:2b050484940cfbf79bcf0a68b8174edfe698f934328647035381c5b869de5131,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1698796074225315761,Labels:map[string]string{io.kubernetes.container.name: g
cp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-qrqv8,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 4a43309c-dcdc-4822-8ca8-e266658cd278,},Annotations:map[string]string{io.kubernetes.container.hash: e700df47,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9137074aa273eabad6a9e29738ac93a9611d458d68b13bcda96856ddc398524a,PodSandboxId:e0b5705af9c0eee50cea7795e4f50cfc9ad93a1a17bd4c93b84762c82e8e84a2,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:16987960060
08018357,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-lzt55,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8d94da0e-b439-4d7d-a977-ba71eb74b3f4,},Annotations:map[string]string{io.kubernetes.container.hash: 6998c5c5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47bb7f64dd213a302756bbf8735daf27b021c2ad5a4569327aa529d520a819c0,PodSandboxId:628a699113376601a440d4df8c6935171d71e12c662889decf3a66d40646080d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b
9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1698796002053625697,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-pd6zk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4fa0ac04-112c-473f-a008-ef4c0231c0c8,},Annotations:map[string]string{io.kubernetes.container.hash: 5265ed9b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62f84de30b61b0b269c8a568ff592b54c9b580b9aef22e31c3b549846f3cc3fe,PodSandboxId:7c52710fbf8915c8acf8c0fe52f8d86296da0493acaacc7125430429a22993df,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@sh
a256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1698795995761365658,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-jt27k,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3a866d4a-c60c-4fc5-a03b-645fd5c4bcef,},Annotations:map[string]string{io.kubernetes.container.hash: ad7a1332,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67e8c850920f78d77d5c549bb3affb9e5f5c59072e76b98136c1659f9d27f1c4,PodSandboxId:f8b1d4f9937cb456fd373bb0c914d5c1f2759a49d6b979ba14ef226c4dcea28c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-mi
nikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698795972161455184,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 506fd78c-7afe-46af-90fc-c1cf59f5aa05,},Annotations:map[string]string{io.kubernetes.container.hash: 19ebd459,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2490e87b4d4d68581243a7f46b88b2c1d0d1de9d6e988f11632aaec86fcdbc07,PodSandboxId:f8b1d4f9937cb456fd373bb0c914d5c1f2759a49d6b979ba14ef226c4dcea28c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-min
ikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698795940066220566,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 506fd78c-7afe-46af-90fc-c1cf59f5aa05,},Annotations:map[string]string{io.kubernetes.container.hash: 19ebd459,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:341dc6c0352fe53f278c2b480487ff6ee7f19ee17ef26e993885ed9100959c22,PodSandboxId:fad96a9df30951510616d939710cab9038a24ddc1924e1a40545612509c971e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-prox
y@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698795932864291705,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-scpgx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ede10144-6a51-452e-b32d-eef8b938bacd,},Annotations:map[string]string{io.kubernetes.container.hash: 1f6d3478,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a3a9ee561bb8c3856a607b4083289a6110aebc5b06a4c4020ed2d33ea4d871b,PodSandboxId:f1c5cc570ba6c2b5ee87d83856d8fbbcbc69e0d0424c818f196b9314e0fc7b24,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0
feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698795929442585104,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mxmdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdce0e30-4f2a-405c-a37c-3dd5009e5544,},Annotations:map[string]string{io.kubernetes.container.hash: d13477bd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda2f6093f954bdce2518bff93c5285059340e550932b5204a19d53fbd300558,PodSandboxId:eee7006a87ed7f4f37fd2bafa8196c43d7caea77672ac94d3d6f030e3beec38f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,}
,Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698795905480625894,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-798361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c9063001c736534711d9be4325debc3,},Annotations:map[string]string{io.kubernetes.container.hash: b28b58e4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77816da35b1f638d6c387f32e15753dcecb0c78e21760f8d3fc41c9f251608ea,PodSandboxId:e93c93224df8add0dca58ed73b8287a41c26440f45557502815d565d5be20f8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748be
c936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698795905501500470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-798361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3707409e65c2a3cb09c052ada1919b,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36b311d0b1b42c375e811119ae9af84b0d191d01fef770a510f4cb34de1aa09d,PodSandboxId:dfd3fe077099049390000652ae1c40bd9b0651f3e6e0ee3ef55d45e9eadf7ea6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18
abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698795904876081861,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-798361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb2fa43e5a0c399b68e0ff0e26eccf3,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd74b1e826f8a2407d8eb4a4f00121d94b4c41796a12ddcbf77e135832f18743,PodSandboxId:02f59606dd24645098be21514cb4dcdc7d3eb01f6f2ee346db20fd24244d2331,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d
2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698795904813993163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-798361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d6b5eaa303f36e2f6b5ce833246913,},Annotations:map[string]string{io.kubernetes.container.hash: cff31595,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d2a601bd-08f5-46e7-9120-73a2a1fed3f7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 23:51:01 addons-798361 crio[715]: time="2023-10-31 23:51:01.976625375Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f0ae1345-7620-44e2-a383-20230cf5d4c6 name=/runtime.v1.RuntimeService/Version
	Oct 31 23:51:01 addons-798361 crio[715]: time="2023-10-31 23:51:01.976688297Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f0ae1345-7620-44e2-a383-20230cf5d4c6 name=/runtime.v1.RuntimeService/Version
	Oct 31 23:51:01 addons-798361 crio[715]: time="2023-10-31 23:51:01.979142812Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e713160a-1f62-49b1-84d6-34c0d9c189cf name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 23:51:01 addons-798361 crio[715]: time="2023-10-31 23:51:01.980360114Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698796261980339730,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:529245,},InodesUsed:&UInt64Value{Value:221,},},},}" file="go-grpc-middleware/chain.go:25" id=e713160a-1f62-49b1-84d6-34c0d9c189cf name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 23:51:01 addons-798361 crio[715]: time="2023-10-31 23:51:01.981311759Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9324612f-7c7d-4a1d-8bcc-3ac066088a80 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 23:51:01 addons-798361 crio[715]: time="2023-10-31 23:51:01.981384618Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9324612f-7c7d-4a1d-8bcc-3ac066088a80 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 23:51:01 addons-798361 crio[715]: time="2023-10-31 23:51:01.981728806Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd8ba1139203435974fd89ead36904a20100751e05245f6708c940294de3b42c,PodSandboxId:247f9b86b88101ed0c09f08a8303f4cf8e191c6d89b517cd87017d49e3adbdd6,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d,State:CONTAINER_RUNNING,CreatedAt:1698796254761398238,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-cqnnz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 818c1d3d-8f4e-4481-a761-9c45fd02d5fa,},Annotations:map[string]string{io.kubernetes.container.hash: f218526e,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49fa61dfa90506d75b21237a878fd150683f4967c648106a965093eba089ccc4,PodSandboxId:fdf61859d51e0ce6c1dbf4d4b8cdc5e19896431617c49f6b21f2419720d7eb20,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4,State:CONTAINER_RUNNING,CreatedAt:1698796132329429740,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-94b766c-fzcp5,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: d63e52a3-c7fc-4035-b867-099157e15969,},Annot
ations:map[string]string{io.kubernetes.container.hash: a8721482,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd69bd70d2c83fab04f29ea983fa6d44a2febe0473a2451660e31d32a98e74bb,PodSandboxId:188de3bbe4f768fa0cf8a8d6d1817dbdbc211b371ec210ee152ef2286710e6a8,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1698796113247315745,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io
.kubernetes.pod.uid: 95a3b560-345b-4ce8-aecb-2b42ff0e1ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 7fd6e9bd,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c62294a20df1d6d4f98d1c8f7f7d7e88f25f74559150f54eaafccd9e1c3795a,PodSandboxId:2b050484940cfbf79bcf0a68b8174edfe698f934328647035381c5b869de5131,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1698796074225315761,Labels:map[string]string{io.kubernetes.container.name: g
cp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-qrqv8,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 4a43309c-dcdc-4822-8ca8-e266658cd278,},Annotations:map[string]string{io.kubernetes.container.hash: e700df47,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9137074aa273eabad6a9e29738ac93a9611d458d68b13bcda96856ddc398524a,PodSandboxId:e0b5705af9c0eee50cea7795e4f50cfc9ad93a1a17bd4c93b84762c82e8e84a2,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:16987960060
08018357,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-lzt55,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8d94da0e-b439-4d7d-a977-ba71eb74b3f4,},Annotations:map[string]string{io.kubernetes.container.hash: 6998c5c5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47bb7f64dd213a302756bbf8735daf27b021c2ad5a4569327aa529d520a819c0,PodSandboxId:628a699113376601a440d4df8c6935171d71e12c662889decf3a66d40646080d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b
9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1698796002053625697,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-pd6zk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4fa0ac04-112c-473f-a008-ef4c0231c0c8,},Annotations:map[string]string{io.kubernetes.container.hash: 5265ed9b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62f84de30b61b0b269c8a568ff592b54c9b580b9aef22e31c3b549846f3cc3fe,PodSandboxId:7c52710fbf8915c8acf8c0fe52f8d86296da0493acaacc7125430429a22993df,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@sh
a256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1698795995761365658,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-jt27k,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3a866d4a-c60c-4fc5-a03b-645fd5c4bcef,},Annotations:map[string]string{io.kubernetes.container.hash: ad7a1332,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67e8c850920f78d77d5c549bb3affb9e5f5c59072e76b98136c1659f9d27f1c4,PodSandboxId:f8b1d4f9937cb456fd373bb0c914d5c1f2759a49d6b979ba14ef226c4dcea28c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-mi
nikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698795972161455184,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 506fd78c-7afe-46af-90fc-c1cf59f5aa05,},Annotations:map[string]string{io.kubernetes.container.hash: 19ebd459,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2490e87b4d4d68581243a7f46b88b2c1d0d1de9d6e988f11632aaec86fcdbc07,PodSandboxId:f8b1d4f9937cb456fd373bb0c914d5c1f2759a49d6b979ba14ef226c4dcea28c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-min
ikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698795940066220566,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 506fd78c-7afe-46af-90fc-c1cf59f5aa05,},Annotations:map[string]string{io.kubernetes.container.hash: 19ebd459,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:341dc6c0352fe53f278c2b480487ff6ee7f19ee17ef26e993885ed9100959c22,PodSandboxId:fad96a9df30951510616d939710cab9038a24ddc1924e1a40545612509c971e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-prox
y@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698795932864291705,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-scpgx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ede10144-6a51-452e-b32d-eef8b938bacd,},Annotations:map[string]string{io.kubernetes.container.hash: 1f6d3478,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a3a9ee561bb8c3856a607b4083289a6110aebc5b06a4c4020ed2d33ea4d871b,PodSandboxId:f1c5cc570ba6c2b5ee87d83856d8fbbcbc69e0d0424c818f196b9314e0fc7b24,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0
feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698795929442585104,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mxmdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdce0e30-4f2a-405c-a37c-3dd5009e5544,},Annotations:map[string]string{io.kubernetes.container.hash: d13477bd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda2f6093f954bdce2518bff93c5285059340e550932b5204a19d53fbd300558,PodSandboxId:eee7006a87ed7f4f37fd2bafa8196c43d7caea77672ac94d3d6f030e3beec38f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,}
,Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698795905480625894,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-798361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c9063001c736534711d9be4325debc3,},Annotations:map[string]string{io.kubernetes.container.hash: b28b58e4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77816da35b1f638d6c387f32e15753dcecb0c78e21760f8d3fc41c9f251608ea,PodSandboxId:e93c93224df8add0dca58ed73b8287a41c26440f45557502815d565d5be20f8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748be
c936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698795905501500470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-798361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3707409e65c2a3cb09c052ada1919b,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36b311d0b1b42c375e811119ae9af84b0d191d01fef770a510f4cb34de1aa09d,PodSandboxId:dfd3fe077099049390000652ae1c40bd9b0651f3e6e0ee3ef55d45e9eadf7ea6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18
abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698795904876081861,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-798361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb2fa43e5a0c399b68e0ff0e26eccf3,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd74b1e826f8a2407d8eb4a4f00121d94b4c41796a12ddcbf77e135832f18743,PodSandboxId:02f59606dd24645098be21514cb4dcdc7d3eb01f6f2ee346db20fd24244d2331,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d
2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698795904813993163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-798361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d6b5eaa303f36e2f6b5ce833246913,},Annotations:map[string]string{io.kubernetes.container.hash: cff31595,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9324612f-7c7d-4a1d-8bcc-3ac066088a80 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bd8ba11392034       gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d                      7 seconds ago       Running             hello-world-app           0                   247f9b86b8810       hello-world-app-5d77478584-cqnnz
	49fa61dfa9050       ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4                        2 minutes ago       Running             headlamp                  0                   fdf61859d51e0       headlamp-94b766c-fzcp5
	cd69bd70d2c83       docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d                              2 minutes ago       Running             nginx                     0                   188de3bbe4f76       nginx
	4c62294a20df1       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 3 minutes ago       Running             gcp-auth                  0                   2b050484940cf       gcp-auth-d4c87556c-qrqv8
	9137074aa273e       1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb                                                             4 minutes ago       Exited              patch                     2                   e0b5705af9c0e       ingress-nginx-admission-patch-lzt55
	47bb7f64dd213       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   4 minutes ago       Exited              create                    0                   628a699113376       ingress-nginx-admission-create-pd6zk
	62f84de30b61b       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   7c52710fbf891       local-path-provisioner-78b46b4d5c-jt27k
	67e8c850920f7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       1                   f8b1d4f9937cb       storage-provisioner
	2490e87b4d4d6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Exited              storage-provisioner       0                   f8b1d4f9937cb       storage-provisioner
	341dc6c0352fe       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                                             5 minutes ago       Running             kube-proxy                0                   fad96a9df3095       kube-proxy-scpgx
	1a3a9ee561bb8       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             5 minutes ago       Running             coredns                   0                   f1c5cc570ba6c       coredns-5dd5756b68-mxmdx
	77816da35b1f6       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                                             5 minutes ago       Running             kube-scheduler            0                   e93c93224df8a       kube-scheduler-addons-798361
	dda2f6093f954       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             5 minutes ago       Running             etcd                      0                   eee7006a87ed7       etcd-addons-798361
	36b311d0b1b42       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                                             5 minutes ago       Running             kube-controller-manager   0                   dfd3fe0770990       kube-controller-manager-addons-798361
	cd74b1e826f8a       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                                             5 minutes ago       Running             kube-apiserver            0                   02f59606dd246       kube-apiserver-addons-798361
	
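Note: the container-status table above is the CRI-O runtime's own listing, captured for debugging only; the test assertions do not parse it. A rough equivalent can usually be reproduced against the same profile with crictl on the minikube node (illustrative command, not part of the test output):

  minikube ssh -p addons-798361 -- sudo crictl ps -a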
	* 
	* ==> coredns [1a3a9ee561bb8c3856a607b4083289a6110aebc5b06a4c4020ed2d33ea4d871b] <==
	* [INFO] 10.244.0.4:57910 - 26148 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000042464s
	[INFO] 10.244.0.4:46695 - 34217 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00004782s
	[INFO] 10.244.0.4:46695 - 7343 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000076596s
	[INFO] 10.244.0.4:43477 - 63044 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00002962s
	[INFO] 10.244.0.4:43477 - 61369 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000127404s
	[INFO] 10.244.0.4:48782 - 59639 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000042582s
	[INFO] 10.244.0.4:48782 - 10228 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000043242s
	[INFO] 10.244.0.4:55244 - 29157 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000046016s
	[INFO] 10.244.0.4:55244 - 2528 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000033122s
	[INFO] 10.244.0.4:57342 - 2999 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000035044s
	[INFO] 10.244.0.4:57342 - 32433 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000031555s
	[INFO] 10.244.0.4:43583 - 13033 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000028265s
	[INFO] 10.244.0.4:43583 - 19695 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00003165s
	[INFO] 10.244.0.4:41916 - 32623 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000038098s
	[INFO] 10.244.0.4:41916 - 50017 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000174172s
	[INFO] 10.244.0.19:56157 - 23927 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000488339s
	[INFO] 10.244.0.19:45526 - 5158 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000104042s
	[INFO] 10.244.0.19:59679 - 16636 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000108392s
	[INFO] 10.244.0.19:47389 - 43135 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000083776s
	[INFO] 10.244.0.19:42671 - 23779 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000094075s
	[INFO] 10.244.0.19:39849 - 12661 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000294852s
	[INFO] 10.244.0.19:49805 - 54807 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000673353s
	[INFO] 10.244.0.19:54994 - 54741 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.000696846s
	[INFO] 10.244.0.21:48824 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000325948s
	[INFO] 10.244.0.21:45678 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000128154s
	
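Note: the NXDOMAIN entries in the CoreDNS log are the expected search-domain expansion (the *.cluster.local suffixes tried before the final NOERROR answer), not lookup failures. To pull the same log outside this report, the pod name shown above suggests something like (illustrative, following the kubectl convention used elsewhere in this report):

  kubectl --context addons-798361 -n kube-system logs coredns-5dd5756b68-mxmdx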
	* 
	* ==> describe nodes <==
	* Name:               addons-798361
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-798361
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9
	                    minikube.k8s.io/name=addons-798361
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_31T23_45_13_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-798361
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 Oct 2023 23:45:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-798361
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 Oct 2023 23:51:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 Oct 2023 23:49:18 +0000   Tue, 31 Oct 2023 23:45:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 Oct 2023 23:49:18 +0000   Tue, 31 Oct 2023 23:45:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 Oct 2023 23:49:18 +0000   Tue, 31 Oct 2023 23:45:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 Oct 2023 23:49:18 +0000   Tue, 31 Oct 2023 23:45:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.214
	  Hostname:    addons-798361
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 d6b04d782d484e12a84c16b958892be7
	  System UUID:                d6b04d78-2d48-4e12-a84c-16b958892be7
	  Boot ID:                    ee810826-8d66-44e5-85ca-8e8014d878c0
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-cqnnz           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  gcp-auth                    gcp-auth-d4c87556c-qrqv8                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  headlamp                    headlamp-94b766c-fzcp5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 coredns-5dd5756b68-mxmdx                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m37s
	  kube-system                 etcd-addons-798361                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m49s
	  kube-system                 kube-apiserver-addons-798361               250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 kube-controller-manager-addons-798361      200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 kube-proxy-scpgx                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 kube-scheduler-addons-798361               100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	  local-path-storage          local-path-provisioner-78b46b4d5c-jt27k    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m22s                  kube-proxy       
	  Normal  Starting                 5m59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m59s (x8 over 5m59s)  kubelet          Node addons-798361 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m59s (x8 over 5m59s)  kubelet          Node addons-798361 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m59s (x7 over 5m59s)  kubelet          Node addons-798361 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m50s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m49s                  kubelet          Node addons-798361 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m49s                  kubelet          Node addons-798361 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m49s                  kubelet          Node addons-798361 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m49s                  kubelet          Node addons-798361 status is now: NodeReady
	  Normal  RegisteredNode           5m38s                  node-controller  Node addons-798361 event: Registered Node addons-798361 in Controller
	
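Note: the node description above is standard kubectl describe output for the single control-plane node. Regenerating it against this profile would look roughly like (illustrative command, not part of the test):

  kubectl --context addons-798361 describe node addons-798361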
	* 
	* ==> dmesg <==
	* [  +0.127823] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.004773] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.080959] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.100702] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.152042] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.099814] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.204308] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[Oct31 23:45] systemd-fstab-generator[908]: Ignoring "noauto" for root device
	[  +9.267427] systemd-fstab-generator[1246]: Ignoring "noauto" for root device
	[ +19.156082] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.221298] kauditd_printk_skb: 47 callbacks suppressed
	[Oct31 23:46] kauditd_printk_skb: 24 callbacks suppressed
	[ +18.258228] kauditd_printk_skb: 16 callbacks suppressed
	[ +22.229766] kauditd_printk_skb: 5 callbacks suppressed
	[Oct31 23:47] kauditd_printk_skb: 1 callbacks suppressed
	[Oct31 23:48] kauditd_printk_skb: 7 callbacks suppressed
	[ +11.232885] kauditd_printk_skb: 20 callbacks suppressed
	[ +11.618326] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.369658] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.480073] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.994370] kauditd_printk_skb: 12 callbacks suppressed
	[Oct31 23:50] kauditd_printk_skb: 7 callbacks suppressed
	
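Note: the dmesg excerpt gives kernel-level context only; the NFSD recovery-directory warnings and kauditd suppression notices are routine for the minikube guest image. Capturing it again would look roughly like (illustrative; dmesg may require root inside the guest):

  minikube ssh -p addons-798361 -- sudo dmesg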
	* 
	* ==> etcd [dda2f6093f954bdce2518bff93c5285059340e550932b5204a19d53fbd300558] <==
	* {"level":"warn","ts":"2023-10-31T23:48:21.188175Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"400.599579ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2023-10-31T23:48:21.188239Z","caller":"traceutil/trace.go:171","msg":"trace[1190679059] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1385; }","duration":"400.700614ms","start":"2023-10-31T23:48:20.787529Z","end":"2023-10-31T23:48:21.188229Z","steps":["trace[1190679059] 'agreement among raft nodes before linearized reading'  (duration: 400.558052ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-31T23:48:21.188282Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-31T23:48:20.787516Z","time spent":"400.759918ms","remote":"127.0.0.1:36390","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":1,"response size":522,"request content":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" "}
	{"level":"info","ts":"2023-10-31T23:48:21.188493Z","caller":"traceutil/trace.go:171","msg":"trace[39238718] transaction","detail":"{read_only:false; response_revision:1385; number_of_response:1; }","duration":"484.805927ms","start":"2023-10-31T23:48:20.703678Z","end":"2023-10-31T23:48:21.188484Z","steps":["trace[39238718] 'process raft request'  (duration: 484.059578ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-31T23:48:21.188599Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.484815ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:8531"}
	{"level":"info","ts":"2023-10-31T23:48:21.188731Z","caller":"traceutil/trace.go:171","msg":"trace[17696187] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:1385; }","duration":"112.617022ms","start":"2023-10-31T23:48:21.076105Z","end":"2023-10-31T23:48:21.188722Z","steps":["trace[17696187] 'agreement among raft nodes before linearized reading'  (duration: 112.462434ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-31T23:48:21.188912Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.892207ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-31T23:48:21.188979Z","caller":"traceutil/trace.go:171","msg":"trace[72447199] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; response_count:0; response_revision:1385; }","duration":"225.012851ms","start":"2023-10-31T23:48:20.963959Z","end":"2023-10-31T23:48:21.188972Z","steps":["trace[72447199] 'agreement among raft nodes before linearized reading'  (duration: 224.877409ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-31T23:48:21.189106Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"288.043379ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2023-10-31T23:48:21.189144Z","caller":"traceutil/trace.go:171","msg":"trace[1957705412] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1385; }","duration":"288.08047ms","start":"2023-10-31T23:48:20.901056Z","end":"2023-10-31T23:48:21.189137Z","steps":["trace[1957705412] 'agreement among raft nodes before linearized reading'  (duration: 288.021114ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-31T23:48:21.188616Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-31T23:48:20.703662Z","time spent":"484.891713ms","remote":"127.0.0.1:36372","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4145,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-lvpgt\" mod_revision:1383 > success:<request_put:<key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-lvpgt\" value_size:4074 >> failure:<request_range:<key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-lvpgt\" > >"}
	{"level":"warn","ts":"2023-10-31T23:48:21.189157Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"324.994627ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-10-31T23:48:21.189368Z","caller":"traceutil/trace.go:171","msg":"trace[184042823] range","detail":"{range_begin:/registry/jobs/; range_end:/registry/jobs0; response_count:0; response_revision:1385; }","duration":"325.205344ms","start":"2023-10-31T23:48:20.864153Z","end":"2023-10-31T23:48:21.189358Z","steps":["trace[184042823] 'agreement among raft nodes before linearized reading'  (duration: 324.975802ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-31T23:48:21.189394Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-31T23:48:20.864112Z","time spent":"325.273316ms","remote":"127.0.0.1:36384","response type":"/etcdserverpb.KV/Range","request count":0,"request size":36,"response count":2,"response size":30,"request content":"key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" count_only:true "}
	{"level":"warn","ts":"2023-10-31T23:48:21.188199Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"333.986773ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-31T23:48:21.189524Z","caller":"traceutil/trace.go:171","msg":"trace[1459278558] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1385; }","duration":"335.321467ms","start":"2023-10-31T23:48:20.854194Z","end":"2023-10-31T23:48:21.189516Z","steps":["trace[1459278558] 'agreement among raft nodes before linearized reading'  (duration: 333.964497ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-31T23:48:21.189552Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-31T23:48:20.854181Z","time spent":"335.363647ms","remote":"127.0.0.1:36322","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2023-10-31T23:48:49.00826Z","caller":"traceutil/trace.go:171","msg":"trace[1825644506] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1643; }","duration":"100.644397ms","start":"2023-10-31T23:48:48.907403Z","end":"2023-10-31T23:48:49.008047Z","steps":["trace[1825644506] 'process raft request'  (duration: 100.51285ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-31T23:48:52.217703Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"332.165062ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6697850487605567204 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/csi-hostpath-attacher-0\" mod_revision:1713 > success:<request_delete_range:<key:\"/registry/pods/kube-system/csi-hostpath-attacher-0\" > > failure:<request_range:<key:\"/registry/pods/kube-system/csi-hostpath-attacher-0\" > >>","response":"size:18"}
	{"level":"info","ts":"2023-10-31T23:48:52.21782Z","caller":"traceutil/trace.go:171","msg":"trace[368898683] linearizableReadLoop","detail":"{readStateIndex:1787; appliedIndex:1786; }","duration":"383.967567ms","start":"2023-10-31T23:48:51.833837Z","end":"2023-10-31T23:48:52.217805Z","steps":["trace[368898683] 'read index received'  (duration: 51.29168ms)","trace[368898683] 'applied index is now lower than readState.Index'  (duration: 332.674899ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-31T23:48:52.217969Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"384.134718ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/kube-system/external-provisioner-cfg\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-31T23:48:52.217997Z","caller":"traceutil/trace.go:171","msg":"trace[821488226] range","detail":"{range_begin:/registry/roles/kube-system/external-provisioner-cfg; range_end:; response_count:0; response_revision:1714; }","duration":"384.174424ms","start":"2023-10-31T23:48:51.833813Z","end":"2023-10-31T23:48:52.217987Z","steps":["trace[821488226] 'agreement among raft nodes before linearized reading'  (duration: 384.033661ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-31T23:48:52.218016Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-31T23:48:51.833799Z","time spent":"384.212425ms","remote":"127.0.0.1:36404","response type":"/etcdserverpb.KV/Range","request count":0,"request size":54,"response count":0,"response size":28,"request content":"key:\"/registry/roles/kube-system/external-provisioner-cfg\" "}
	{"level":"info","ts":"2023-10-31T23:48:52.218258Z","caller":"traceutil/trace.go:171","msg":"trace[602510010] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1714; }","duration":"495.080229ms","start":"2023-10-31T23:48:51.72317Z","end":"2023-10-31T23:48:52.21825Z","steps":["trace[602510010] 'process raft request'  (duration: 161.95155ms)","trace[602510010] 'compare'  (duration: 330.5293ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-31T23:48:52.218339Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-31T23:48:51.723152Z","time spent":"495.154139ms","remote":"127.0.0.1:36372","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":54,"response count":0,"response size":41,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/csi-hostpath-attacher-0\" mod_revision:1713 > success:<request_delete_range:<key:\"/registry/pods/kube-system/csi-hostpath-attacher-0\" > > failure:<request_range:<key:\"/registry/pods/kube-system/csi-hostpath-attacher-0\" > >"}
	
	* 
	* ==> gcp-auth [4c62294a20df1d6d4f98d1c8f7f7d7e88f25f74559150f54eaafccd9e1c3795a] <==
	* 2023/10/31 23:47:54 GCP Auth Webhook started!
	2023/10/31 23:48:12 Ready to marshal response ...
	2023/10/31 23:48:12 Ready to write response ...
	2023/10/31 23:48:13 Ready to marshal response ...
	2023/10/31 23:48:13 Ready to write response ...
	2023/10/31 23:48:19 Ready to marshal response ...
	2023/10/31 23:48:19 Ready to write response ...
	2023/10/31 23:48:20 Ready to marshal response ...
	2023/10/31 23:48:20 Ready to write response ...
	2023/10/31 23:48:22 Ready to marshal response ...
	2023/10/31 23:48:22 Ready to write response ...
	2023/10/31 23:48:24 Ready to marshal response ...
	2023/10/31 23:48:24 Ready to write response ...
	2023/10/31 23:48:35 Ready to marshal response ...
	2023/10/31 23:48:35 Ready to write response ...
	2023/10/31 23:48:42 Ready to marshal response ...
	2023/10/31 23:48:42 Ready to write response ...
	2023/10/31 23:48:44 Ready to marshal response ...
	2023/10/31 23:48:44 Ready to write response ...
	2023/10/31 23:48:44 Ready to marshal response ...
	2023/10/31 23:48:44 Ready to write response ...
	2023/10/31 23:48:44 Ready to marshal response ...
	2023/10/31 23:48:44 Ready to write response ...
	2023/10/31 23:50:51 Ready to marshal response ...
	2023/10/31 23:50:51 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  23:51:02 up 6 min,  0 users,  load average: 0.46, 1.39, 0.82
	Linux addons-798361 5.10.57 #1 SMP Tue Oct 31 22:14:31 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [cd74b1e826f8a2407d8eb4a4f00121d94b4c41796a12ddcbf77e135832f18743] <==
	* I1031 23:48:22.529513       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1031 23:48:22.830281       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.154.231"}
	I1031 23:48:31.206656       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1031 23:48:44.232365       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.162.155"}
	I1031 23:48:52.219222       1 trace.go:236] Trace[924470491]: "Delete" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:371a2fef-0393-4159-bbd6-03a0948d249f,client:192.168.39.214,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/csi-hostpath-attacher-0,user-agent:kubelet/v1.28.3 (linux/amd64) kubernetes/a8a1abc,verb:DELETE (31-Oct-2023 23:48:51.683) (total time: 535ms):
	Trace[924470491]: ---"Object deleted from database" 497ms (23:48:52.218)
	Trace[924470491]: [535.809198ms] [535.809198ms] END
	I1031 23:48:56.092674       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1031 23:48:56.092751       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1031 23:48:56.109582       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1031 23:48:56.109660       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1031 23:48:56.121787       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1031 23:48:56.122089       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1031 23:48:56.139272       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1031 23:48:56.139351       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1031 23:48:56.165185       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1031 23:48:56.165570       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1031 23:48:56.180998       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1031 23:48:56.181100       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1031 23:48:56.181548       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1031 23:48:56.181602       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1031 23:48:57.153065       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1031 23:48:57.182554       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1031 23:48:57.203798       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1031 23:50:51.839996       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.111.106"}
	
	* 
	* ==> kube-controller-manager [36b311d0b1b42c375e811119ae9af84b0d191d01fef770a510f4cb34de1aa09d] <==
	* W1031 23:49:43.529012       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1031 23:49:43.529054       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1031 23:49:57.851080       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1031 23:49:57.851185       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1031 23:50:11.562584       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1031 23:50:11.562687       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1031 23:50:15.294442       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1031 23:50:15.294470       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1031 23:50:41.021363       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1031 23:50:41.021565       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1031 23:50:43.371844       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1031 23:50:43.371968       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1031 23:50:45.845148       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1031 23:50:45.845191       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1031 23:50:51.463361       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1031 23:50:51.496699       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-cqnnz"
	I1031 23:50:51.515073       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="53.43296ms"
	I1031 23:50:51.547185       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="32.011302ms"
	I1031 23:50:51.547317       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="37.97µs"
	I1031 23:50:51.547409       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="48.369µs"
	I1031 23:50:54.013348       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1031 23:50:54.021308       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="7.123µs"
	I1031 23:50:54.030270       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1031 23:50:55.213810       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="10.211158ms"
	I1031 23:50:55.214276       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="227.055µs"
	
	* 
	* ==> kube-proxy [341dc6c0352fe53f278c2b480487ff6ee7f19ee17ef26e993885ed9100959c22] <==
	* I1031 23:45:39.346493       1 server_others.go:69] "Using iptables proxy"
	I1031 23:45:39.440473       1 node.go:141] Successfully retrieved node IP: 192.168.39.214
	I1031 23:45:40.128063       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1031 23:45:40.128136       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1031 23:45:40.165050       1 server_others.go:152] "Using iptables Proxier"
	I1031 23:45:40.166192       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1031 23:45:40.169255       1 server.go:846] "Version info" version="v1.28.3"
	I1031 23:45:40.171452       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1031 23:45:40.172440       1 config.go:188] "Starting service config controller"
	I1031 23:45:40.172491       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1031 23:45:40.172525       1 config.go:97] "Starting endpoint slice config controller"
	I1031 23:45:40.172541       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1031 23:45:40.174792       1 config.go:315] "Starting node config controller"
	I1031 23:45:40.174833       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1031 23:45:40.283598       1 shared_informer.go:318] Caches are synced for node config
	I1031 23:45:40.283721       1 shared_informer.go:318] Caches are synced for service config
	I1031 23:45:40.283759       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [77816da35b1f638d6c387f32e15753dcecb0c78e21760f8d3fc41c9f251608ea] <==
	* W1031 23:45:09.439608       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1031 23:45:09.439695       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1031 23:45:10.247391       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1031 23:45:10.247610       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1031 23:45:10.253297       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1031 23:45:10.253431       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1031 23:45:10.428248       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1031 23:45:10.428298       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1031 23:45:10.510468       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1031 23:45:10.510560       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1031 23:45:10.525475       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1031 23:45:10.525567       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1031 23:45:10.552787       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1031 23:45:10.552967       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1031 23:45:10.635068       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1031 23:45:10.635169       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1031 23:45:10.635457       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1031 23:45:10.635566       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1031 23:45:10.663675       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1031 23:45:10.663792       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1031 23:45:10.668830       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1031 23:45:10.668992       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1031 23:45:10.954593       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1031 23:45:10.954698       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1031 23:45:13.912553       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-31 23:44:40 UTC, ends at Tue 2023-10-31 23:51:02 UTC. --
	Oct 31 23:50:51 addons-798361 kubelet[1253]: I1031 23:50:51.510764    1253 memory_manager.go:346] "RemoveStaleState removing state" podUID="014abb9c-39d6-4033-af44-73919d89490d" containerName="helper-pod"
	Oct 31 23:50:51 addons-798361 kubelet[1253]: I1031 23:50:51.510772    1253 memory_manager.go:346] "RemoveStaleState removing state" podUID="258033be-2ce4-4c7d-8b59-2dd0aa7cc2a8" containerName="volume-snapshot-controller"
	Oct 31 23:50:51 addons-798361 kubelet[1253]: I1031 23:50:51.570081    1253 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/818c1d3d-8f4e-4481-a761-9c45fd02d5fa-gcp-creds\") pod \"hello-world-app-5d77478584-cqnnz\" (UID: \"818c1d3d-8f4e-4481-a761-9c45fd02d5fa\") " pod="default/hello-world-app-5d77478584-cqnnz"
	Oct 31 23:50:51 addons-798361 kubelet[1253]: I1031 23:50:51.570200    1253 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8w79\" (UniqueName: \"kubernetes.io/projected/818c1d3d-8f4e-4481-a761-9c45fd02d5fa-kube-api-access-w8w79\") pod \"hello-world-app-5d77478584-cqnnz\" (UID: \"818c1d3d-8f4e-4481-a761-9c45fd02d5fa\") " pod="default/hello-world-app-5d77478584-cqnnz"
	Oct 31 23:50:52 addons-798361 kubelet[1253]: I1031 23:50:52.879366    1253 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnshx\" (UniqueName: \"kubernetes.io/projected/6db8a83e-76e9-449e-b3e2-c9bc0acf077f-kube-api-access-hnshx\") pod \"6db8a83e-76e9-449e-b3e2-c9bc0acf077f\" (UID: \"6db8a83e-76e9-449e-b3e2-c9bc0acf077f\") "
	Oct 31 23:50:52 addons-798361 kubelet[1253]: I1031 23:50:52.885750    1253 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6db8a83e-76e9-449e-b3e2-c9bc0acf077f-kube-api-access-hnshx" (OuterVolumeSpecName: "kube-api-access-hnshx") pod "6db8a83e-76e9-449e-b3e2-c9bc0acf077f" (UID: "6db8a83e-76e9-449e-b3e2-c9bc0acf077f"). InnerVolumeSpecName "kube-api-access-hnshx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 31 23:50:52 addons-798361 kubelet[1253]: I1031 23:50:52.980244    1253 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hnshx\" (UniqueName: \"kubernetes.io/projected/6db8a83e-76e9-449e-b3e2-c9bc0acf077f-kube-api-access-hnshx\") on node \"addons-798361\" DevicePath \"\""
	Oct 31 23:50:53 addons-798361 kubelet[1253]: I1031 23:50:53.160371    1253 scope.go:117] "RemoveContainer" containerID="60775d30c52057b7744a3a4021ec3e319f57a9f6bc6d65a2ec061a03b78f8ffd"
	Oct 31 23:50:53 addons-798361 kubelet[1253]: I1031 23:50:53.186012    1253 scope.go:117] "RemoveContainer" containerID="60775d30c52057b7744a3a4021ec3e319f57a9f6bc6d65a2ec061a03b78f8ffd"
	Oct 31 23:50:53 addons-798361 kubelet[1253]: E1031 23:50:53.189195    1253 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60775d30c52057b7744a3a4021ec3e319f57a9f6bc6d65a2ec061a03b78f8ffd\": container with ID starting with 60775d30c52057b7744a3a4021ec3e319f57a9f6bc6d65a2ec061a03b78f8ffd not found: ID does not exist" containerID="60775d30c52057b7744a3a4021ec3e319f57a9f6bc6d65a2ec061a03b78f8ffd"
	Oct 31 23:50:53 addons-798361 kubelet[1253]: I1031 23:50:53.189287    1253 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60775d30c52057b7744a3a4021ec3e319f57a9f6bc6d65a2ec061a03b78f8ffd"} err="failed to get container status \"60775d30c52057b7744a3a4021ec3e319f57a9f6bc6d65a2ec061a03b78f8ffd\": rpc error: code = NotFound desc = could not find container \"60775d30c52057b7744a3a4021ec3e319f57a9f6bc6d65a2ec061a03b78f8ffd\": container with ID starting with 60775d30c52057b7744a3a4021ec3e319f57a9f6bc6d65a2ec061a03b78f8ffd not found: ID does not exist"
	Oct 31 23:50:54 addons-798361 kubelet[1253]: I1031 23:50:54.970252    1253 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4fa0ac04-112c-473f-a008-ef4c0231c0c8" path="/var/lib/kubelet/pods/4fa0ac04-112c-473f-a008-ef4c0231c0c8/volumes"
	Oct 31 23:50:54 addons-798361 kubelet[1253]: I1031 23:50:54.970688    1253 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6db8a83e-76e9-449e-b3e2-c9bc0acf077f" path="/var/lib/kubelet/pods/6db8a83e-76e9-449e-b3e2-c9bc0acf077f/volumes"
	Oct 31 23:50:54 addons-798361 kubelet[1253]: I1031 23:50:54.971168    1253 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8d94da0e-b439-4d7d-a977-ba71eb74b3f4" path="/var/lib/kubelet/pods/8d94da0e-b439-4d7d-a977-ba71eb74b3f4/volumes"
	Oct 31 23:50:57 addons-798361 kubelet[1253]: I1031 23:50:57.412078    1253 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ttxh\" (UniqueName: \"kubernetes.io/projected/482cc58c-6342-4429-8b42-06f401ad25b9-kube-api-access-8ttxh\") pod \"482cc58c-6342-4429-8b42-06f401ad25b9\" (UID: \"482cc58c-6342-4429-8b42-06f401ad25b9\") "
	Oct 31 23:50:57 addons-798361 kubelet[1253]: I1031 23:50:57.412132    1253 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/482cc58c-6342-4429-8b42-06f401ad25b9-webhook-cert\") pod \"482cc58c-6342-4429-8b42-06f401ad25b9\" (UID: \"482cc58c-6342-4429-8b42-06f401ad25b9\") "
	Oct 31 23:50:57 addons-798361 kubelet[1253]: I1031 23:50:57.415502    1253 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/482cc58c-6342-4429-8b42-06f401ad25b9-kube-api-access-8ttxh" (OuterVolumeSpecName: "kube-api-access-8ttxh") pod "482cc58c-6342-4429-8b42-06f401ad25b9" (UID: "482cc58c-6342-4429-8b42-06f401ad25b9"). InnerVolumeSpecName "kube-api-access-8ttxh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 31 23:50:57 addons-798361 kubelet[1253]: I1031 23:50:57.417927    1253 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/482cc58c-6342-4429-8b42-06f401ad25b9-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "482cc58c-6342-4429-8b42-06f401ad25b9" (UID: "482cc58c-6342-4429-8b42-06f401ad25b9"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 31 23:50:57 addons-798361 kubelet[1253]: I1031 23:50:57.513032    1253 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/482cc58c-6342-4429-8b42-06f401ad25b9-webhook-cert\") on node \"addons-798361\" DevicePath \"\""
	Oct 31 23:50:57 addons-798361 kubelet[1253]: I1031 23:50:57.513070    1253 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8ttxh\" (UniqueName: \"kubernetes.io/projected/482cc58c-6342-4429-8b42-06f401ad25b9-kube-api-access-8ttxh\") on node \"addons-798361\" DevicePath \"\""
	Oct 31 23:50:58 addons-798361 kubelet[1253]: I1031 23:50:58.206045    1253 scope.go:117] "RemoveContainer" containerID="452cdde47c9b7e7b0096bdfa608d494852b6b54d21a25f91fc099422c2979984"
	Oct 31 23:50:58 addons-798361 kubelet[1253]: I1031 23:50:58.234795    1253 scope.go:117] "RemoveContainer" containerID="452cdde47c9b7e7b0096bdfa608d494852b6b54d21a25f91fc099422c2979984"
	Oct 31 23:50:58 addons-798361 kubelet[1253]: E1031 23:50:58.235475    1253 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"452cdde47c9b7e7b0096bdfa608d494852b6b54d21a25f91fc099422c2979984\": container with ID starting with 452cdde47c9b7e7b0096bdfa608d494852b6b54d21a25f91fc099422c2979984 not found: ID does not exist" containerID="452cdde47c9b7e7b0096bdfa608d494852b6b54d21a25f91fc099422c2979984"
	Oct 31 23:50:58 addons-798361 kubelet[1253]: I1031 23:50:58.235562    1253 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"452cdde47c9b7e7b0096bdfa608d494852b6b54d21a25f91fc099422c2979984"} err="failed to get container status \"452cdde47c9b7e7b0096bdfa608d494852b6b54d21a25f91fc099422c2979984\": rpc error: code = NotFound desc = could not find container \"452cdde47c9b7e7b0096bdfa608d494852b6b54d21a25f91fc099422c2979984\": container with ID starting with 452cdde47c9b7e7b0096bdfa608d494852b6b54d21a25f91fc099422c2979984 not found: ID does not exist"
	Oct 31 23:50:58 addons-798361 kubelet[1253]: I1031 23:50:58.970220    1253 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="482cc58c-6342-4429-8b42-06f401ad25b9" path="/var/lib/kubelet/pods/482cc58c-6342-4429-8b42-06f401ad25b9/volumes"
	
	* 
	* ==> storage-provisioner [2490e87b4d4d68581243a7f46b88b2c1d0d1de9d6e988f11632aaec86fcdbc07] <==
	* I1031 23:45:40.944571       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1031 23:46:10.995039       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [67e8c850920f78d77d5c549bb3affb9e5f5c59072e76b98136c1659f9d27f1c4] <==
	* I1031 23:46:12.538398       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1031 23:46:12.565067       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1031 23:46:12.566770       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1031 23:46:12.595676       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1031 23:46:12.597765       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-798361_f533621e-e319-45af-a88a-83599b1c87f7!
	I1031 23:46:12.600405       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"42641ad0-0bb3-4d7f-99a6-45b3b63be1b4", APIVersion:"v1", ResourceVersion:"916", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-798361_f533621e-e319-45af-a88a-83599b1c87f7 became leader
	I1031 23:46:12.699048       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-798361_f533621e-e319-45af-a88a-83599b1c87f7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-798361 -n addons-798361
helpers_test.go:261: (dbg) Run:  kubectl --context addons-798361 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (174.49s)

                                                
                                    
TestAddons/StoppedEnableDisable (155.35s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-798361
addons_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-798361: exit status 82 (2m1.355119821s)

                                                
                                                
-- stdout --
	* Stopping node "addons-798361"  ...
	* Stopping node "addons-798361"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:173: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-798361" : exit status 82
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-798361
addons_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-798361: exit status 11 (21.709170142s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.214:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:177: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-798361" : exit status 11
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-798361
addons_test.go:179: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-798361: exit status 11 (6.143589993s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.214:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:181: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-798361" : exit status 11
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-798361
addons_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-798361: exit status 11 (6.142694546s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.214:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:186: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-798361" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (155.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 image load --daemon gcr.io/google-containers/addon-resizer:functional-736766 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-736766 image load --daemon gcr.io/google-containers/addon-resizer:functional-736766 --alsologtostderr: (3.937785542s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-736766" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (4.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-736766 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (2.350330039s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-736766 image ls: (2.320809983s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-736766" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (4.67s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (177.71s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-060181 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E1101 00:00:46.349516   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-060181 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (16.23244294s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-060181 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-060181 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [04f9bdf4-9b60-4627-ad0b-34a6e7e61432] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [04f9bdf4-9b60-4627-ad0b-34a6e7e61432] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.040796487s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-060181 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1101 00:02:16.006108   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
E1101 00:02:16.011411   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
E1101 00:02:16.021701   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
E1101 00:02:16.042027   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
E1101 00:02:16.082365   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
E1101 00:02:16.162708   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
E1101 00:02:16.323104   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
E1101 00:02:16.643799   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
E1101 00:02:17.284773   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
E1101 00:02:18.565273   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
E1101 00:02:21.127170   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
E1101 00:02:26.247375   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
E1101 00:02:36.488106   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
E1101 00:02:56.968444   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
E1101 00:03:02.507273   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-060181 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.469130301s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-060181 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-060181 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.88
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-060181 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-060181 addons disable ingress-dns --alsologtostderr -v=1: (7.476023262s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-060181 addons disable ingress --alsologtostderr -v=1
E1101 00:03:30.190062   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-060181 addons disable ingress --alsologtostderr -v=1: (7.548840481s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-060181 -n ingress-addon-legacy-060181
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-060181 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-060181 logs -n 25: (1.130658325s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|----------------------------------------------------------|-----------------------------|---------|----------------|---------------------|---------------------|
	|    Command     |                           Args                           |           Profile           |  User   |    Version     |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------|-----------------------------|---------|----------------|---------------------|---------------------|
	| ssh            | functional-736766 ssh findmnt                            | functional-736766           | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:57 UTC | 31 Oct 23 23:57 UTC |
	|                | -T /mount1                                               |                             |         |                |                     |                     |
	| ssh            | functional-736766 ssh findmnt                            | functional-736766           | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:57 UTC | 31 Oct 23 23:57 UTC |
	|                | -T /mount2                                               |                             |         |                |                     |                     |
	| ssh            | functional-736766 ssh findmnt                            | functional-736766           | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:57 UTC | 31 Oct 23 23:57 UTC |
	|                | -T /mount3                                               |                             |         |                |                     |                     |
	| mount          | -p functional-736766                                     | functional-736766           | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:57 UTC |                     |
	|                | --kill=true                                              |                             |         |                |                     |                     |
	| dashboard      | --url --port 36195                                       | functional-736766           | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:57 UTC | 31 Oct 23 23:58 UTC |
	|                | -p functional-736766                                     |                             |         |                |                     |                     |
	|                | --alsologtostderr -v=1                                   |                             |         |                |                     |                     |
	| image          | functional-736766 image ls                               | functional-736766           | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:57 UTC | 31 Oct 23 23:57 UTC |
	| image          | functional-736766 image save --daemon                    | functional-736766           | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:57 UTC | 31 Oct 23 23:57 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-736766 |                             |         |                |                     |                     |
	|                | --alsologtostderr                                        |                             |         |                |                     |                     |
	| update-context | functional-736766                                        | functional-736766           | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:57 UTC | 31 Oct 23 23:57 UTC |
	|                | update-context                                           |                             |         |                |                     |                     |
	|                | --alsologtostderr -v=2                                   |                             |         |                |                     |                     |
	| update-context | functional-736766                                        | functional-736766           | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:57 UTC | 31 Oct 23 23:57 UTC |
	|                | update-context                                           |                             |         |                |                     |                     |
	|                | --alsologtostderr -v=2                                   |                             |         |                |                     |                     |
	| update-context | functional-736766                                        | functional-736766           | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:57 UTC | 31 Oct 23 23:57 UTC |
	|                | update-context                                           |                             |         |                |                     |                     |
	|                | --alsologtostderr -v=2                                   |                             |         |                |                     |                     |
	| image          | functional-736766                                        | functional-736766           | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:57 UTC | 31 Oct 23 23:57 UTC |
	|                | image ls --format short                                  |                             |         |                |                     |                     |
	|                | --alsologtostderr                                        |                             |         |                |                     |                     |
	| image          | functional-736766                                        | functional-736766           | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:57 UTC | 31 Oct 23 23:57 UTC |
	|                | image ls --format yaml                                   |                             |         |                |                     |                     |
	|                | --alsologtostderr                                        |                             |         |                |                     |                     |
	| ssh            | functional-736766 ssh pgrep                              | functional-736766           | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:57 UTC |                     |
	|                | buildkitd                                                |                             |         |                |                     |                     |
	| image          | functional-736766 image build -t                         | functional-736766           | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:57 UTC | 31 Oct 23 23:58 UTC |
	|                | localhost/my-image:functional-736766                     |                             |         |                |                     |                     |
	|                | testdata/build --alsologtostderr                         |                             |         |                |                     |                     |
	| image          | functional-736766                                        | functional-736766           | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:58 UTC | 31 Oct 23 23:58 UTC |
	|                | image ls --format json                                   |                             |         |                |                     |                     |
	|                | --alsologtostderr                                        |                             |         |                |                     |                     |
	| image          | functional-736766                                        | functional-736766           | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:58 UTC | 31 Oct 23 23:58 UTC |
	|                | image ls --format table                                  |                             |         |                |                     |                     |
	|                | --alsologtostderr                                        |                             |         |                |                     |                     |
	| image          | functional-736766 image ls                               | functional-736766           | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:58 UTC | 31 Oct 23 23:58 UTC |
	| delete         | -p functional-736766                                     | functional-736766           | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:58 UTC | 31 Oct 23 23:58 UTC |
	| start          | -p ingress-addon-legacy-060181                           | ingress-addon-legacy-060181 | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:58 UTC | 01 Nov 23 00:00 UTC |
	|                | --kubernetes-version=v1.18.20                            |                             |         |                |                     |                     |
	|                | --memory=4096 --wait=true                                |                             |         |                |                     |                     |
	|                | --alsologtostderr                                        |                             |         |                |                     |                     |
	|                | -v=5 --driver=kvm2                                       |                             |         |                |                     |                     |
	|                | --container-runtime=crio                                 |                             |         |                |                     |                     |
	| addons         | ingress-addon-legacy-060181                              | ingress-addon-legacy-060181 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:00 UTC | 01 Nov 23 00:00 UTC |
	|                | addons enable ingress                                    |                             |         |                |                     |                     |
	|                | --alsologtostderr -v=5                                   |                             |         |                |                     |                     |
	| addons         | ingress-addon-legacy-060181                              | ingress-addon-legacy-060181 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:00 UTC | 01 Nov 23 00:00 UTC |
	|                | addons enable ingress-dns                                |                             |         |                |                     |                     |
	|                | --alsologtostderr -v=5                                   |                             |         |                |                     |                     |
	| ssh            | ingress-addon-legacy-060181                              | ingress-addon-legacy-060181 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:01 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                            |                             |         |                |                     |                     |
	|                | -H 'Host: nginx.example.com'                             |                             |         |                |                     |                     |
	| ip             | ingress-addon-legacy-060181 ip                           | ingress-addon-legacy-060181 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:03 UTC | 01 Nov 23 00:03 UTC |
	| addons         | ingress-addon-legacy-060181                              | ingress-addon-legacy-060181 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:03 UTC | 01 Nov 23 00:03 UTC |
	|                | addons disable ingress-dns                               |                             |         |                |                     |                     |
	|                | --alsologtostderr -v=1                                   |                             |         |                |                     |                     |
	| addons         | ingress-addon-legacy-060181                              | ingress-addon-legacy-060181 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:03 UTC | 01 Nov 23 00:03 UTC |
	|                | addons disable ingress                                   |                             |         |                |                     |                     |
	|                | --alsologtostderr -v=1                                   |                             |         |                |                     |                     |
	|----------------|----------------------------------------------------------|-----------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/31 23:58:06
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
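
	Note: the "Log line format" header above describes the klog-style prefix carried by every entry that follows. As a purely illustrative aid for post-processing such logs (this is not part of minikube; the regular expression and field names are assumptions), a minimal sketch that splits one line into severity, date, time, thread id, source location, and message could look like this:

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogLine matches lines like:
	//   I1031 23:58:06.304070   22836 out.go:296] Setting OutFile to fd 1 ...
	// Capture groups: severity, mmdd, time, threadid, file:line, message.
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^\]]+)\] (.*)$`)

	func main() {
		sample := "I1031 23:58:06.304070   22836 out.go:296] Setting OutFile to fd 1 ..."
		if m := klogLine.FindStringSubmatch(sample); m != nil {
			fmt.Printf("severity=%s date=%s time=%s tid=%s loc=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}
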
	I1031 23:58:06.304070   22836 out.go:296] Setting OutFile to fd 1 ...
	I1031 23:58:06.304351   22836 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 23:58:06.304361   22836 out.go:309] Setting ErrFile to fd 2...
	I1031 23:58:06.304365   22836 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 23:58:06.304551   22836 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7305/.minikube/bin
	I1031 23:58:06.305113   22836 out.go:303] Setting JSON to false
	I1031 23:58:06.305935   22836 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2432,"bootTime":1698794255,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 23:58:06.305991   22836 start.go:138] virtualization: kvm guest
	I1031 23:58:06.308471   22836 out.go:177] * [ingress-addon-legacy-060181] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1031 23:58:06.310278   22836 out.go:177]   - MINIKUBE_LOCATION=17486
	I1031 23:58:06.310328   22836 notify.go:220] Checking for updates...
	I1031 23:58:06.311520   22836 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 23:58:06.312935   22836 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1031 23:58:06.314503   22836 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7305/.minikube
	I1031 23:58:06.316174   22836 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 23:58:06.317749   22836 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1031 23:58:06.319303   22836 driver.go:378] Setting default libvirt URI to qemu:///system
	I1031 23:58:06.356237   22836 out.go:177] * Using the kvm2 driver based on user configuration
	I1031 23:58:06.357541   22836 start.go:298] selected driver: kvm2
	I1031 23:58:06.357554   22836 start.go:902] validating driver "kvm2" against <nil>
	I1031 23:58:06.357567   22836 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 23:58:06.358476   22836 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 23:58:06.358546   22836 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17486-7305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 23:58:06.372809   22836 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1031 23:58:06.372860   22836 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1031 23:58:06.373084   22836 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1031 23:58:06.373155   22836 cni.go:84] Creating CNI manager for ""
	I1031 23:58:06.373168   22836 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 23:58:06.373177   22836 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1031 23:58:06.373190   22836 start_flags.go:323] config:
	{Name:ingress-addon-legacy-060181 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-060181 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 23:58:06.373339   22836 iso.go:125] acquiring lock: {Name:mk1f649ca0b7c1ae293cd66cb85f9eeda028b20b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 23:58:06.375233   22836 out.go:177] * Starting control plane node ingress-addon-legacy-060181 in cluster ingress-addon-legacy-060181
	I1031 23:58:06.376669   22836 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1031 23:58:06.477624   22836 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1031 23:58:06.477667   22836 cache.go:56] Caching tarball of preloaded images
	I1031 23:58:06.477815   22836 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1031 23:58:06.479754   22836 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1031 23:58:06.481391   22836 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1031 23:58:06.586316   22836 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1031 23:58:22.575363   22836 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1031 23:58:22.575464   22836 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1031 23:58:23.559099   22836 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
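
	Note: the preload tarball above is downloaded with an md5 checksum appended to the URL and is verified before being used. A minimal sketch of the same kind of verification (illustrative only, not minikube's implementation; the file name and expected digest are taken from the download line above):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"log"
		"os"
	)

	func main() {
		// Expected digest from the ?checksum=md5:... parameter in the URL above.
		const want = "0d02e096853189c5b37812b400898e14"
		path := "preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4"

		f, err := os.Open(path)
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			log.Fatal(err)
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			log.Fatalf("checksum mismatch: got %s, want %s", got, want)
		}
		fmt.Println("preload checksum OK")
	}
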
	I1031 23:58:23.559408   22836 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/config.json ...
	I1031 23:58:23.559437   22836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/config.json: {Name:mkc4f055178e55dbedc7595cd4db98f3eec9372b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 23:58:23.559607   22836 start.go:365] acquiring machines lock for ingress-addon-legacy-060181: {Name:mk7aad88408c319111b9be8e59d9593a9e88374b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 23:58:23.559645   22836 start.go:369] acquired machines lock for "ingress-addon-legacy-060181" in 17.111µs
	I1031 23:58:23.559664   22836 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-060181 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-060181 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1031 23:58:23.559734   22836 start.go:125] createHost starting for "" (driver="kvm2")
	I1031 23:58:23.562925   22836 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1031 23:58:23.563062   22836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:58:23.563108   22836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:58:23.577289   22836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46141
	I1031 23:58:23.577743   22836 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:58:23.578402   22836 main.go:141] libmachine: Using API Version  1
	I1031 23:58:23.578432   22836 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:58:23.578834   22836 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:58:23.579080   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetMachineName
	I1031 23:58:23.579209   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .DriverName
	I1031 23:58:23.579394   22836 start.go:159] libmachine.API.Create for "ingress-addon-legacy-060181" (driver="kvm2")
	I1031 23:58:23.579425   22836 client.go:168] LocalClient.Create starting
	I1031 23:58:23.579466   22836 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem
	I1031 23:58:23.579515   22836 main.go:141] libmachine: Decoding PEM data...
	I1031 23:58:23.579532   22836 main.go:141] libmachine: Parsing certificate...
	I1031 23:58:23.579587   22836 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem
	I1031 23:58:23.579620   22836 main.go:141] libmachine: Decoding PEM data...
	I1031 23:58:23.579637   22836 main.go:141] libmachine: Parsing certificate...
	I1031 23:58:23.579654   22836 main.go:141] libmachine: Running pre-create checks...
	I1031 23:58:23.579667   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .PreCreateCheck
	I1031 23:58:23.580022   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetConfigRaw
	I1031 23:58:23.580527   22836 main.go:141] libmachine: Creating machine...
	I1031 23:58:23.580541   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .Create
	I1031 23:58:23.580714   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Creating KVM machine...
	I1031 23:58:23.582495   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | found existing default KVM network
	I1031 23:58:23.583210   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | I1031 23:58:23.583025   22903 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001478d0}
	I1031 23:58:23.589180   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | trying to create private KVM network mk-ingress-addon-legacy-060181 192.168.39.0/24...
	I1031 23:58:23.665124   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | private KVM network mk-ingress-addon-legacy-060181 192.168.39.0/24 created
	I1031 23:58:23.665159   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Setting up store path in /home/jenkins/minikube-integration/17486-7305/.minikube/machines/ingress-addon-legacy-060181 ...
	I1031 23:58:23.665173   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | I1031 23:58:23.665097   22903 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17486-7305/.minikube
	I1031 23:58:23.665186   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Building disk image from file:///home/jenkins/minikube-integration/17486-7305/.minikube/cache/iso/amd64/minikube-v1.32.0-1698773592-17486-amd64.iso
	I1031 23:58:23.665290   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Downloading /home/jenkins/minikube-integration/17486-7305/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17486-7305/.minikube/cache/iso/amd64/minikube-v1.32.0-1698773592-17486-amd64.iso...
	I1031 23:58:23.877450   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | I1031 23:58:23.877316   22903 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/ingress-addon-legacy-060181/id_rsa...
	I1031 23:58:23.976760   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | I1031 23:58:23.976587   22903 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/ingress-addon-legacy-060181/ingress-addon-legacy-060181.rawdisk...
	I1031 23:58:23.976794   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | Writing magic tar header
	I1031 23:58:23.976807   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | Writing SSH key tar header
	I1031 23:58:23.976816   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | I1031 23:58:23.976706   22903 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17486-7305/.minikube/machines/ingress-addon-legacy-060181 ...
	I1031 23:58:23.976829   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/ingress-addon-legacy-060181
	I1031 23:58:23.976885   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Setting executable bit set on /home/jenkins/minikube-integration/17486-7305/.minikube/machines/ingress-addon-legacy-060181 (perms=drwx------)
	I1031 23:58:23.976901   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17486-7305/.minikube/machines
	I1031 23:58:23.976909   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Setting executable bit set on /home/jenkins/minikube-integration/17486-7305/.minikube/machines (perms=drwxr-xr-x)
	I1031 23:58:23.976922   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Setting executable bit set on /home/jenkins/minikube-integration/17486-7305/.minikube (perms=drwxr-xr-x)
	I1031 23:58:23.976931   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Setting executable bit set on /home/jenkins/minikube-integration/17486-7305 (perms=drwxrwxr-x)
	I1031 23:58:23.976938   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17486-7305/.minikube
	I1031 23:58:23.976951   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17486-7305
	I1031 23:58:23.976958   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1031 23:58:23.976970   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | Checking permissions on dir: /home/jenkins
	I1031 23:58:23.976979   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | Checking permissions on dir: /home
	I1031 23:58:23.976997   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | Skipping /home - not owner
	I1031 23:58:23.977012   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1031 23:58:23.977023   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1031 23:58:23.977030   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Creating domain...
	I1031 23:58:23.978230   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) define libvirt domain using xml: 
	I1031 23:58:23.978261   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) <domain type='kvm'>
	I1031 23:58:23.978275   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)   <name>ingress-addon-legacy-060181</name>
	I1031 23:58:23.978288   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)   <memory unit='MiB'>4096</memory>
	I1031 23:58:23.978300   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)   <vcpu>2</vcpu>
	I1031 23:58:23.978308   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)   <features>
	I1031 23:58:23.978315   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)     <acpi/>
	I1031 23:58:23.978321   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)     <apic/>
	I1031 23:58:23.978332   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)     <pae/>
	I1031 23:58:23.978351   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)     
	I1031 23:58:23.978377   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)   </features>
	I1031 23:58:23.978392   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)   <cpu mode='host-passthrough'>
	I1031 23:58:23.978404   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)   
	I1031 23:58:23.978410   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)   </cpu>
	I1031 23:58:23.978419   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)   <os>
	I1031 23:58:23.978433   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)     <type>hvm</type>
	I1031 23:58:23.978443   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)     <boot dev='cdrom'/>
	I1031 23:58:23.978456   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)     <boot dev='hd'/>
	I1031 23:58:23.978507   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)     <bootmenu enable='no'/>
	I1031 23:58:23.978526   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)   </os>
	I1031 23:58:23.978533   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)   <devices>
	I1031 23:58:23.978540   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)     <disk type='file' device='cdrom'>
	I1031 23:58:23.978551   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)       <source file='/home/jenkins/minikube-integration/17486-7305/.minikube/machines/ingress-addon-legacy-060181/boot2docker.iso'/>
	I1031 23:58:23.978558   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)       <target dev='hdc' bus='scsi'/>
	I1031 23:58:23.978565   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)       <readonly/>
	I1031 23:58:23.978570   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)     </disk>
	I1031 23:58:23.978577   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)     <disk type='file' device='disk'>
	I1031 23:58:23.978584   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1031 23:58:23.978633   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)       <source file='/home/jenkins/minikube-integration/17486-7305/.minikube/machines/ingress-addon-legacy-060181/ingress-addon-legacy-060181.rawdisk'/>
	I1031 23:58:23.978673   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)       <target dev='hda' bus='virtio'/>
	I1031 23:58:23.978688   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)     </disk>
	I1031 23:58:23.978703   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)     <interface type='network'>
	I1031 23:58:23.978717   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)       <source network='mk-ingress-addon-legacy-060181'/>
	I1031 23:58:23.978731   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)       <model type='virtio'/>
	I1031 23:58:23.978746   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)     </interface>
	I1031 23:58:23.978761   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)     <interface type='network'>
	I1031 23:58:23.978777   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)       <source network='default'/>
	I1031 23:58:23.978795   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)       <model type='virtio'/>
	I1031 23:58:23.978810   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)     </interface>
	I1031 23:58:23.978825   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)     <serial type='pty'>
	I1031 23:58:23.978840   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)       <target port='0'/>
	I1031 23:58:23.978854   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)     </serial>
	I1031 23:58:23.978885   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)     <console type='pty'>
	I1031 23:58:23.978916   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)       <target type='serial' port='0'/>
	I1031 23:58:23.978936   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)     </console>
	I1031 23:58:23.978953   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)     <rng model='virtio'>
	I1031 23:58:23.978970   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)       <backend model='random'>/dev/random</backend>
	I1031 23:58:23.978979   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)     </rng>
	I1031 23:58:23.978988   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)     
	I1031 23:58:23.978996   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)     
	I1031 23:58:23.979007   22836 main.go:141] libmachine: (ingress-addon-legacy-060181)   </devices>
	I1031 23:58:23.979019   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) </domain>
	I1031 23:58:23.979043   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) 
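
	Note: the XML printed above is the complete libvirt domain definition the kvm2 driver generates for this node. Defining and starting an equivalent domain by hand would look roughly like the sketch below, which simply shells out to virsh; the domain.xml file name is a placeholder and error handling is minimal:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// domain.xml is assumed to contain the <domain type='kvm'> definition shown above.
		for _, args := range [][]string{
			{"virsh", "define", "domain.xml"},
			{"virsh", "start", "ingress-addon-legacy-060181"},
		} {
			out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
			if err != nil {
				log.Fatalf("%v failed: %v\n%s", args, err, out)
			}
			log.Printf("%v: %s", args, out)
		}
	}
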
	I1031 23:58:23.983704   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:7a:2c:fe in network default
	I1031 23:58:23.984411   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Ensuring networks are active...
	I1031 23:58:23.984437   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:23.985635   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Ensuring network default is active
	I1031 23:58:23.986188   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Ensuring network mk-ingress-addon-legacy-060181 is active
	I1031 23:58:23.987162   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Getting domain xml...
	I1031 23:58:23.988031   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Creating domain...
	I1031 23:58:25.257634   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Waiting to get IP...
	I1031 23:58:25.258460   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:25.258899   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | unable to find current IP address of domain ingress-addon-legacy-060181 in network mk-ingress-addon-legacy-060181
	I1031 23:58:25.258946   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | I1031 23:58:25.258873   22903 retry.go:31] will retry after 289.448126ms: waiting for machine to come up
	I1031 23:58:25.550294   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:25.550800   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | unable to find current IP address of domain ingress-addon-legacy-060181 in network mk-ingress-addon-legacy-060181
	I1031 23:58:25.550823   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | I1031 23:58:25.550745   22903 retry.go:31] will retry after 333.745964ms: waiting for machine to come up
	I1031 23:58:25.886255   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:25.886764   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | unable to find current IP address of domain ingress-addon-legacy-060181 in network mk-ingress-addon-legacy-060181
	I1031 23:58:25.886789   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | I1031 23:58:25.886729   22903 retry.go:31] will retry after 473.013292ms: waiting for machine to come up
	I1031 23:58:26.361371   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:26.361910   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | unable to find current IP address of domain ingress-addon-legacy-060181 in network mk-ingress-addon-legacy-060181
	I1031 23:58:26.361942   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | I1031 23:58:26.361855   22903 retry.go:31] will retry after 372.325194ms: waiting for machine to come up
	I1031 23:58:26.735322   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:26.735813   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | unable to find current IP address of domain ingress-addon-legacy-060181 in network mk-ingress-addon-legacy-060181
	I1031 23:58:26.735843   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | I1031 23:58:26.735763   22903 retry.go:31] will retry after 675.688076ms: waiting for machine to come up
	I1031 23:58:27.412719   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:27.413249   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | unable to find current IP address of domain ingress-addon-legacy-060181 in network mk-ingress-addon-legacy-060181
	I1031 23:58:27.413276   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | I1031 23:58:27.413209   22903 retry.go:31] will retry after 616.936942ms: waiting for machine to come up
	I1031 23:58:28.032363   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:28.032868   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | unable to find current IP address of domain ingress-addon-legacy-060181 in network mk-ingress-addon-legacy-060181
	I1031 23:58:28.032911   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | I1031 23:58:28.032798   22903 retry.go:31] will retry after 994.318424ms: waiting for machine to come up
	I1031 23:58:29.029915   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:29.030694   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | unable to find current IP address of domain ingress-addon-legacy-060181 in network mk-ingress-addon-legacy-060181
	I1031 23:58:29.030745   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | I1031 23:58:29.030605   22903 retry.go:31] will retry after 1.486937765s: waiting for machine to come up
	I1031 23:58:30.519375   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:30.519867   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | unable to find current IP address of domain ingress-addon-legacy-060181 in network mk-ingress-addon-legacy-060181
	I1031 23:58:30.519894   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | I1031 23:58:30.519788   22903 retry.go:31] will retry after 1.476966491s: waiting for machine to come up
	I1031 23:58:31.998120   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:31.998516   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | unable to find current IP address of domain ingress-addon-legacy-060181 in network mk-ingress-addon-legacy-060181
	I1031 23:58:31.998536   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | I1031 23:58:31.998470   22903 retry.go:31] will retry after 1.629247557s: waiting for machine to come up
	I1031 23:58:33.628815   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:33.629417   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | unable to find current IP address of domain ingress-addon-legacy-060181 in network mk-ingress-addon-legacy-060181
	I1031 23:58:33.629449   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | I1031 23:58:33.629356   22903 retry.go:31] will retry after 2.000260791s: waiting for machine to come up
	I1031 23:58:35.632663   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:35.633058   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | unable to find current IP address of domain ingress-addon-legacy-060181 in network mk-ingress-addon-legacy-060181
	I1031 23:58:35.633089   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | I1031 23:58:35.633031   22903 retry.go:31] will retry after 2.856200119s: waiting for machine to come up
	I1031 23:58:38.492960   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:38.493305   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | unable to find current IP address of domain ingress-addon-legacy-060181 in network mk-ingress-addon-legacy-060181
	I1031 23:58:38.493329   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | I1031 23:58:38.493264   22903 retry.go:31] will retry after 3.991946861s: waiting for machine to come up
	I1031 23:58:42.489847   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:42.490237   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | unable to find current IP address of domain ingress-addon-legacy-060181 in network mk-ingress-addon-legacy-060181
	I1031 23:58:42.490267   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | I1031 23:58:42.490190   22903 retry.go:31] will retry after 3.805540015s: waiting for machine to come up
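
	Note: the block above is the driver polling the network's DHCP leases until the new domain reports an IP address, sleeping a randomized, growing interval between attempts. A minimal sketch of that retry-until-timeout pattern (the lookup function is a stand-in; minikube itself uses its internal retry package):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a stand-in for querying the libvirt network's DHCP leases.
	func lookupIP() (string, error) {
		return "", errors.New("no lease yet") // placeholder
	}

	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		backoff := 300 * time.Millisecond
		for attempt := 1; time.Now().Before(deadline); attempt++ {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			// Randomize and grow the delay, roughly like the intervals in the log.
			delay := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("attempt %d: will retry after %v\n", attempt, delay)
			time.Sleep(delay)
			backoff = backoff * 3 / 2
		}
		return "", fmt.Errorf("timed out after %v waiting for machine to come up", timeout)
	}

	func main() {
		if ip, err := waitForIP(90 * time.Second); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("found IP:", ip)
		}
	}
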
	I1031 23:58:46.300149   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:46.300920   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Found IP for machine: 192.168.39.88
	I1031 23:58:46.300957   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has current primary IP address 192.168.39.88 and MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:46.300964   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Reserving static IP address...
	I1031 23:58:46.301341   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-060181", mac: "52:54:00:85:db:50", ip: "192.168.39.88"} in network mk-ingress-addon-legacy-060181
	I1031 23:58:46.373310   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | Getting to WaitForSSH function...
	I1031 23:58:46.373343   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Reserved static IP address: 192.168.39.88
	I1031 23:58:46.373358   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Waiting for SSH to be available...
	I1031 23:58:46.375846   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:46.376145   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:85:db:50", ip: ""} in network mk-ingress-addon-legacy-060181
	I1031 23:58:46.376179   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | unable to find defined IP address of network mk-ingress-addon-legacy-060181 interface with MAC address 52:54:00:85:db:50
	I1031 23:58:46.376296   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | Using SSH client type: external
	I1031 23:58:46.376329   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/ingress-addon-legacy-060181/id_rsa (-rw-------)
	I1031 23:58:46.376377   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7305/.minikube/machines/ingress-addon-legacy-060181/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 23:58:46.376397   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | About to run SSH command:
	I1031 23:58:46.376514   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | exit 0
	I1031 23:58:46.380181   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | SSH cmd err, output: exit status 255: 
	I1031 23:58:46.380203   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1031 23:58:46.380211   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | command : exit 0
	I1031 23:58:46.380219   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | err     : exit status 255
	I1031 23:58:46.380231   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | output  : 
	I1031 23:58:49.380668   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | Getting to WaitForSSH function...
	I1031 23:58:49.383148   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:49.383549   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:db:50", ip: ""} in network mk-ingress-addon-legacy-060181: {Iface:virbr1 ExpiryTime:2023-11-01 00:58:39 +0000 UTC Type:0 Mac:52:54:00:85:db:50 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:ingress-addon-legacy-060181 Clientid:01:52:54:00:85:db:50}
	I1031 23:58:49.383577   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined IP address 192.168.39.88 and MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:49.383745   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | Using SSH client type: external
	I1031 23:58:49.383764   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/ingress-addon-legacy-060181/id_rsa (-rw-------)
	I1031 23:58:49.383782   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.88 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7305/.minikube/machines/ingress-addon-legacy-060181/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 23:58:49.383790   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | About to run SSH command:
	I1031 23:58:49.383799   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | exit 0
	I1031 23:58:49.471669   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | SSH cmd err, output: <nil>: 
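
	Note: the "external" SSH client above is just /usr/bin/ssh invoked with the option list printed in the log (the first probe fails with exit status 255 because the lease had not yet been resolved; the second, using 192.168.39.88, succeeds). Reproducing that probe by hand would look roughly like this illustrative reconstruction, with the host and key path taken from the log:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "ControlMaster=no",
			"-o", "ControlPath=none",
			"-o", "LogLevel=quiet",
			"-o", "PasswordAuthentication=no",
			"-o", "ServerAliveInterval=60",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", "/home/jenkins/minikube-integration/17486-7305/.minikube/machines/ingress-addon-legacy-060181/id_rsa",
			"-p", "22",
			"docker@192.168.39.88",
			"exit 0",
		}
		if out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput(); err != nil {
			log.Fatalf("ssh probe failed: %v\n%s", err, out)
		}
		log.Println("SSH is available")
	}
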
	I1031 23:58:49.471958   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) KVM machine creation complete!
	I1031 23:58:49.472330   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetConfigRaw
	I1031 23:58:49.472860   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .DriverName
	I1031 23:58:49.473033   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .DriverName
	I1031 23:58:49.473217   22836 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1031 23:58:49.473231   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetState
	I1031 23:58:49.474338   22836 main.go:141] libmachine: Detecting operating system of created instance...
	I1031 23:58:49.474353   22836 main.go:141] libmachine: Waiting for SSH to be available...
	I1031 23:58:49.474362   22836 main.go:141] libmachine: Getting to WaitForSSH function...
	I1031 23:58:49.474372   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHHostname
	I1031 23:58:49.476464   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:49.476784   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:db:50", ip: ""} in network mk-ingress-addon-legacy-060181: {Iface:virbr1 ExpiryTime:2023-11-01 00:58:39 +0000 UTC Type:0 Mac:52:54:00:85:db:50 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:ingress-addon-legacy-060181 Clientid:01:52:54:00:85:db:50}
	I1031 23:58:49.476815   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined IP address 192.168.39.88 and MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:49.476913   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHPort
	I1031 23:58:49.477074   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHKeyPath
	I1031 23:58:49.477221   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHKeyPath
	I1031 23:58:49.477440   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHUsername
	I1031 23:58:49.477625   22836 main.go:141] libmachine: Using SSH client type: native
	I1031 23:58:49.477957   22836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I1031 23:58:49.477971   22836 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1031 23:58:49.587289   22836 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 23:58:49.587315   22836 main.go:141] libmachine: Detecting the provisioner...
	I1031 23:58:49.587325   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHHostname
	I1031 23:58:49.589892   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:49.590247   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:db:50", ip: ""} in network mk-ingress-addon-legacy-060181: {Iface:virbr1 ExpiryTime:2023-11-01 00:58:39 +0000 UTC Type:0 Mac:52:54:00:85:db:50 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:ingress-addon-legacy-060181 Clientid:01:52:54:00:85:db:50}
	I1031 23:58:49.590272   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined IP address 192.168.39.88 and MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:49.590416   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHPort
	I1031 23:58:49.590668   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHKeyPath
	I1031 23:58:49.590822   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHKeyPath
	I1031 23:58:49.590949   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHUsername
	I1031 23:58:49.591102   22836 main.go:141] libmachine: Using SSH client type: native
	I1031 23:58:49.591477   22836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I1031 23:58:49.591491   22836 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1031 23:58:49.700833   22836 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g0cee705-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1031 23:58:49.700922   22836 main.go:141] libmachine: found compatible host: buildroot
	I1031 23:58:49.700934   22836 main.go:141] libmachine: Provisioning with buildroot...
	I1031 23:58:49.700942   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetMachineName
	I1031 23:58:49.701234   22836 buildroot.go:166] provisioning hostname "ingress-addon-legacy-060181"
	I1031 23:58:49.701260   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetMachineName
	I1031 23:58:49.701480   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHHostname
	I1031 23:58:49.704524   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:49.704917   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:db:50", ip: ""} in network mk-ingress-addon-legacy-060181: {Iface:virbr1 ExpiryTime:2023-11-01 00:58:39 +0000 UTC Type:0 Mac:52:54:00:85:db:50 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:ingress-addon-legacy-060181 Clientid:01:52:54:00:85:db:50}
	I1031 23:58:49.704952   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined IP address 192.168.39.88 and MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:49.705089   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHPort
	I1031 23:58:49.705308   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHKeyPath
	I1031 23:58:49.705482   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHKeyPath
	I1031 23:58:49.705679   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHUsername
	I1031 23:58:49.705840   22836 main.go:141] libmachine: Using SSH client type: native
	I1031 23:58:49.706163   22836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I1031 23:58:49.706177   22836 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-060181 && echo "ingress-addon-legacy-060181" | sudo tee /etc/hostname
	I1031 23:58:49.832117   22836 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-060181
	
	I1031 23:58:49.832149   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHHostname
	I1031 23:58:49.835044   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:49.835381   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:db:50", ip: ""} in network mk-ingress-addon-legacy-060181: {Iface:virbr1 ExpiryTime:2023-11-01 00:58:39 +0000 UTC Type:0 Mac:52:54:00:85:db:50 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:ingress-addon-legacy-060181 Clientid:01:52:54:00:85:db:50}
	I1031 23:58:49.835415   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined IP address 192.168.39.88 and MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:49.835507   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHPort
	I1031 23:58:49.835676   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHKeyPath
	I1031 23:58:49.835828   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHKeyPath
	I1031 23:58:49.836019   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHUsername
	I1031 23:58:49.836187   22836 main.go:141] libmachine: Using SSH client type: native
	I1031 23:58:49.836480   22836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I1031 23:58:49.836501   22836 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-060181' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-060181/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-060181' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 23:58:49.955670   22836 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 23:58:49.955709   22836 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1031 23:58:49.955763   22836 buildroot.go:174] setting up certificates
	I1031 23:58:49.955775   22836 provision.go:83] configureAuth start
	I1031 23:58:49.955788   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetMachineName
	I1031 23:58:49.956175   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetIP
	I1031 23:58:49.958995   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:49.959358   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:db:50", ip: ""} in network mk-ingress-addon-legacy-060181: {Iface:virbr1 ExpiryTime:2023-11-01 00:58:39 +0000 UTC Type:0 Mac:52:54:00:85:db:50 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:ingress-addon-legacy-060181 Clientid:01:52:54:00:85:db:50}
	I1031 23:58:49.959390   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined IP address 192.168.39.88 and MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:49.959510   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHHostname
	I1031 23:58:49.961750   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:49.962107   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:db:50", ip: ""} in network mk-ingress-addon-legacy-060181: {Iface:virbr1 ExpiryTime:2023-11-01 00:58:39 +0000 UTC Type:0 Mac:52:54:00:85:db:50 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:ingress-addon-legacy-060181 Clientid:01:52:54:00:85:db:50}
	I1031 23:58:49.962133   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined IP address 192.168.39.88 and MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:49.962253   22836 provision.go:138] copyHostCerts
	I1031 23:58:49.962287   22836 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1031 23:58:49.962327   22836 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem, removing ...
	I1031 23:58:49.962343   22836 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1031 23:58:49.962424   22836 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1031 23:58:49.962517   22836 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1031 23:58:49.962542   22836 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem, removing ...
	I1031 23:58:49.962551   22836 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1031 23:58:49.962586   22836 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1031 23:58:49.962647   22836 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1031 23:58:49.962671   22836 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem, removing ...
	I1031 23:58:49.962680   22836 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1031 23:58:49.962708   22836 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1031 23:58:49.962770   22836 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-060181 san=[192.168.39.88 192.168.39.88 localhost 127.0.0.1 minikube ingress-addon-legacy-060181]
	I1031 23:58:50.112497   22836 provision.go:172] copyRemoteCerts
	I1031 23:58:50.112555   22836 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 23:58:50.112577   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHHostname
	I1031 23:58:50.115374   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:50.115805   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:db:50", ip: ""} in network mk-ingress-addon-legacy-060181: {Iface:virbr1 ExpiryTime:2023-11-01 00:58:39 +0000 UTC Type:0 Mac:52:54:00:85:db:50 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:ingress-addon-legacy-060181 Clientid:01:52:54:00:85:db:50}
	I1031 23:58:50.115843   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined IP address 192.168.39.88 and MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:50.116034   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHPort
	I1031 23:58:50.116228   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHKeyPath
	I1031 23:58:50.116378   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHUsername
	I1031 23:58:50.116517   22836 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/ingress-addon-legacy-060181/id_rsa Username:docker}
	I1031 23:58:50.200439   22836 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1031 23:58:50.200524   22836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1031 23:58:50.223645   22836 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1031 23:58:50.223730   22836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1031 23:58:50.245408   22836 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1031 23:58:50.245510   22836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1031 23:58:50.267701   22836 provision.go:86] duration metric: configureAuth took 311.911508ms
	I1031 23:58:50.267726   22836 buildroot.go:189] setting minikube options for container-runtime
	I1031 23:58:50.267922   22836 config.go:182] Loaded profile config "ingress-addon-legacy-060181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1031 23:58:50.268028   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHHostname
	I1031 23:58:50.271436   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:50.271773   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:db:50", ip: ""} in network mk-ingress-addon-legacy-060181: {Iface:virbr1 ExpiryTime:2023-11-01 00:58:39 +0000 UTC Type:0 Mac:52:54:00:85:db:50 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:ingress-addon-legacy-060181 Clientid:01:52:54:00:85:db:50}
	I1031 23:58:50.271811   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined IP address 192.168.39.88 and MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:50.272028   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHPort
	I1031 23:58:50.272262   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHKeyPath
	I1031 23:58:50.272427   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHKeyPath
	I1031 23:58:50.272581   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHUsername
	I1031 23:58:50.272740   22836 main.go:141] libmachine: Using SSH client type: native
	I1031 23:58:50.273206   22836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I1031 23:58:50.273232   22836 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1031 23:58:50.570013   22836 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1031 23:58:50.570042   22836 main.go:141] libmachine: Checking connection to Docker...
	I1031 23:58:50.570054   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetURL
	I1031 23:58:50.571247   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | Using libvirt version 6000000
	I1031 23:58:50.573289   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:50.573601   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:db:50", ip: ""} in network mk-ingress-addon-legacy-060181: {Iface:virbr1 ExpiryTime:2023-11-01 00:58:39 +0000 UTC Type:0 Mac:52:54:00:85:db:50 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:ingress-addon-legacy-060181 Clientid:01:52:54:00:85:db:50}
	I1031 23:58:50.573634   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined IP address 192.168.39.88 and MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:50.573755   22836 main.go:141] libmachine: Docker is up and running!
	I1031 23:58:50.573790   22836 main.go:141] libmachine: Reticulating splines...
	I1031 23:58:50.573801   22836 client.go:171] LocalClient.Create took 26.99436556s
	I1031 23:58:50.573827   22836 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-060181" took 26.994433236s
	I1031 23:58:50.573840   22836 start.go:300] post-start starting for "ingress-addon-legacy-060181" (driver="kvm2")
	I1031 23:58:50.573856   22836 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 23:58:50.573880   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .DriverName
	I1031 23:58:50.574087   22836 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 23:58:50.574104   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHHostname
	I1031 23:58:50.576380   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:50.576708   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:db:50", ip: ""} in network mk-ingress-addon-legacy-060181: {Iface:virbr1 ExpiryTime:2023-11-01 00:58:39 +0000 UTC Type:0 Mac:52:54:00:85:db:50 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:ingress-addon-legacy-060181 Clientid:01:52:54:00:85:db:50}
	I1031 23:58:50.576736   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined IP address 192.168.39.88 and MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:50.576867   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHPort
	I1031 23:58:50.577042   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHKeyPath
	I1031 23:58:50.577200   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHUsername
	I1031 23:58:50.577342   22836 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/ingress-addon-legacy-060181/id_rsa Username:docker}
	I1031 23:58:50.661200   22836 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 23:58:50.665475   22836 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 23:58:50.665494   22836 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1031 23:58:50.665578   22836 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1031 23:58:50.665665   22836 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> 145042.pem in /etc/ssl/certs
	I1031 23:58:50.665677   22836 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> /etc/ssl/certs/145042.pem
	I1031 23:58:50.665780   22836 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 23:58:50.674082   22836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /etc/ssl/certs/145042.pem (1708 bytes)
	I1031 23:58:50.699262   22836 start.go:303] post-start completed in 125.405935ms
	I1031 23:58:50.699313   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetConfigRaw
	I1031 23:58:50.699879   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetIP
	I1031 23:58:50.702334   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:50.702758   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:db:50", ip: ""} in network mk-ingress-addon-legacy-060181: {Iface:virbr1 ExpiryTime:2023-11-01 00:58:39 +0000 UTC Type:0 Mac:52:54:00:85:db:50 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:ingress-addon-legacy-060181 Clientid:01:52:54:00:85:db:50}
	I1031 23:58:50.702792   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined IP address 192.168.39.88 and MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:50.702989   22836 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/config.json ...
	I1031 23:58:50.703163   22836 start.go:128] duration metric: createHost completed in 27.143417387s
	I1031 23:58:50.703185   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHHostname
	I1031 23:58:50.705301   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:50.705621   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:db:50", ip: ""} in network mk-ingress-addon-legacy-060181: {Iface:virbr1 ExpiryTime:2023-11-01 00:58:39 +0000 UTC Type:0 Mac:52:54:00:85:db:50 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:ingress-addon-legacy-060181 Clientid:01:52:54:00:85:db:50}
	I1031 23:58:50.705655   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined IP address 192.168.39.88 and MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:50.705787   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHPort
	I1031 23:58:50.705999   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHKeyPath
	I1031 23:58:50.706140   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHKeyPath
	I1031 23:58:50.706280   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHUsername
	I1031 23:58:50.706414   22836 main.go:141] libmachine: Using SSH client type: native
	I1031 23:58:50.706699   22836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I1031 23:58:50.706710   22836 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1031 23:58:50.820720   22836 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698796730.797094898
	
	I1031 23:58:50.820740   22836 fix.go:206] guest clock: 1698796730.797094898
	I1031 23:58:50.820747   22836 fix.go:219] Guest: 2023-10-31 23:58:50.797094898 +0000 UTC Remote: 2023-10-31 23:58:50.703174471 +0000 UTC m=+44.449002594 (delta=93.920427ms)
	I1031 23:58:50.820779   22836 fix.go:190] guest clock delta is within tolerance: 93.920427ms
	I1031 23:58:50.820786   22836 start.go:83] releasing machines lock for "ingress-addon-legacy-060181", held for 27.261131858s
	I1031 23:58:50.820809   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .DriverName
	I1031 23:58:50.821047   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetIP
	I1031 23:58:50.823734   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:50.824111   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:db:50", ip: ""} in network mk-ingress-addon-legacy-060181: {Iface:virbr1 ExpiryTime:2023-11-01 00:58:39 +0000 UTC Type:0 Mac:52:54:00:85:db:50 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:ingress-addon-legacy-060181 Clientid:01:52:54:00:85:db:50}
	I1031 23:58:50.824144   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined IP address 192.168.39.88 and MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:50.824283   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .DriverName
	I1031 23:58:50.824742   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .DriverName
	I1031 23:58:50.824903   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .DriverName
	I1031 23:58:50.824979   22836 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 23:58:50.825011   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHHostname
	I1031 23:58:50.825102   22836 ssh_runner.go:195] Run: cat /version.json
	I1031 23:58:50.825128   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHHostname
	I1031 23:58:50.827549   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:50.827666   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:50.827910   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:db:50", ip: ""} in network mk-ingress-addon-legacy-060181: {Iface:virbr1 ExpiryTime:2023-11-01 00:58:39 +0000 UTC Type:0 Mac:52:54:00:85:db:50 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:ingress-addon-legacy-060181 Clientid:01:52:54:00:85:db:50}
	I1031 23:58:50.827953   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined IP address 192.168.39.88 and MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:50.828091   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHPort
	I1031 23:58:50.828210   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:db:50", ip: ""} in network mk-ingress-addon-legacy-060181: {Iface:virbr1 ExpiryTime:2023-11-01 00:58:39 +0000 UTC Type:0 Mac:52:54:00:85:db:50 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:ingress-addon-legacy-060181 Clientid:01:52:54:00:85:db:50}
	I1031 23:58:50.828238   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHKeyPath
	I1031 23:58:50.828289   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined IP address 192.168.39.88 and MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:50.828381   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHPort
	I1031 23:58:50.828464   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHUsername
	I1031 23:58:50.828581   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHKeyPath
	I1031 23:58:50.828657   22836 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/ingress-addon-legacy-060181/id_rsa Username:docker}
	I1031 23:58:50.828695   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHUsername
	I1031 23:58:50.828798   22836 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/ingress-addon-legacy-060181/id_rsa Username:docker}
	I1031 23:58:50.908329   22836 ssh_runner.go:195] Run: systemctl --version
	I1031 23:58:50.947465   22836 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1031 23:58:51.108918   22836 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1031 23:58:51.114590   22836 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 23:58:51.114658   22836 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 23:58:51.129362   22836 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 23:58:51.129387   22836 start.go:472] detecting cgroup driver to use...
	I1031 23:58:51.129445   22836 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 23:58:51.146061   22836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 23:58:51.158293   22836 docker.go:204] disabling cri-docker service (if available) ...
	I1031 23:58:51.158360   22836 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1031 23:58:51.170327   22836 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1031 23:58:51.183521   22836 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1031 23:58:51.293820   22836 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1031 23:58:51.418077   22836 docker.go:220] disabling docker service ...
	I1031 23:58:51.418147   22836 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1031 23:58:51.430985   22836 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1031 23:58:51.442241   22836 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1031 23:58:51.562472   22836 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1031 23:58:51.683683   22836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1031 23:58:51.696215   22836 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 23:58:51.712634   22836 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1031 23:58:51.712706   22836 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 23:58:51.721960   22836 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1031 23:58:51.722024   22836 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 23:58:51.731074   22836 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 23:58:51.740165   22836 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 23:58:51.749139   22836 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 23:58:51.758433   22836 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 23:58:51.766699   22836 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1031 23:58:51.766758   22836 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1031 23:58:51.778281   22836 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 23:58:51.786811   22836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 23:58:51.892962   22836 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1031 23:58:52.050967   22836 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1031 23:58:52.051058   22836 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1031 23:58:52.057737   22836 start.go:540] Will wait 60s for crictl version
	I1031 23:58:52.057804   22836 ssh_runner.go:195] Run: which crictl
	I1031 23:58:52.061344   22836 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 23:58:52.108957   22836 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1031 23:58:52.109040   22836 ssh_runner.go:195] Run: crio --version
	I1031 23:58:52.152978   22836 ssh_runner.go:195] Run: crio --version
	I1031 23:58:52.201070   22836 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I1031 23:58:52.202693   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetIP
	I1031 23:58:52.205512   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:52.205912   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:db:50", ip: ""} in network mk-ingress-addon-legacy-060181: {Iface:virbr1 ExpiryTime:2023-11-01 00:58:39 +0000 UTC Type:0 Mac:52:54:00:85:db:50 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:ingress-addon-legacy-060181 Clientid:01:52:54:00:85:db:50}
	I1031 23:58:52.205945   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined IP address 192.168.39.88 and MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:58:52.206205   22836 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1031 23:58:52.210050   22836 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 23:58:52.221484   22836 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1031 23:58:52.221532   22836 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 23:58:52.254992   22836 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1031 23:58:52.255052   22836 ssh_runner.go:195] Run: which lz4
	I1031 23:58:52.258557   22836 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1031 23:58:52.258659   22836 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1031 23:58:52.262601   22836 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1031 23:58:52.262634   22836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I1031 23:58:53.944703   22836 crio.go:444] Took 1.686055 seconds to copy over tarball
	I1031 23:58:53.944776   22836 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1031 23:58:57.053284   22836 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.108479557s)
	I1031 23:58:57.053315   22836 crio.go:451] Took 3.108585 seconds to extract the tarball
	I1031 23:58:57.053327   22836 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1031 23:58:57.097592   22836 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 23:58:57.147677   22836 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1031 23:58:57.147699   22836 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1031 23:58:57.147762   22836 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 23:58:57.147792   22836 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1031 23:58:57.147826   22836 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1031 23:58:57.147836   22836 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1031 23:58:57.147885   22836 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1031 23:58:57.147915   22836 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1031 23:58:57.147984   22836 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1031 23:58:57.148016   22836 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1031 23:58:57.149023   22836 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1031 23:58:57.149033   22836 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1031 23:58:57.149036   22836 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 23:58:57.149123   22836 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1031 23:58:57.149160   22836 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1031 23:58:57.149237   22836 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1031 23:58:57.149291   22836 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1031 23:58:57.149359   22836 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1031 23:58:57.371026   22836 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1031 23:58:57.375866   22836 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1031 23:58:57.378896   22836 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1031 23:58:57.384668   22836 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1031 23:58:57.386465   22836 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1031 23:58:57.386534   22836 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1031 23:58:57.392347   22836 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1031 23:58:57.503973   22836 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1031 23:58:57.504018   22836 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1031 23:58:57.504070   22836 ssh_runner.go:195] Run: which crictl
	I1031 23:58:57.551368   22836 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1031 23:58:57.551411   22836 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1031 23:58:57.551447   22836 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1031 23:58:57.551515   22836 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1031 23:58:57.551576   22836 ssh_runner.go:195] Run: which crictl
	I1031 23:58:57.551453   22836 ssh_runner.go:195] Run: which crictl
	I1031 23:58:57.571954   22836 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1031 23:58:57.571997   22836 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1031 23:58:57.572030   22836 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1031 23:58:57.572046   22836 ssh_runner.go:195] Run: which crictl
	I1031 23:58:57.572067   22836 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1031 23:58:57.572068   22836 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1031 23:58:57.572095   22836 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1031 23:58:57.572108   22836 ssh_runner.go:195] Run: which crictl
	I1031 23:58:57.572137   22836 ssh_runner.go:195] Run: which crictl
	I1031 23:58:57.572175   22836 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1031 23:58:57.572194   22836 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1031 23:58:57.572205   22836 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1031 23:58:57.572242   22836 ssh_runner.go:195] Run: which crictl
	I1031 23:58:57.572278   22836 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1031 23:58:57.572281   22836 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1031 23:58:57.586939   22836 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1031 23:58:57.586968   22836 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1031 23:58:57.587088   22836 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1031 23:58:57.587090   22836 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1031 23:58:57.708570   22836 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1031 23:58:57.708639   22836 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1031 23:58:57.721570   22836 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1031 23:58:57.721643   22836 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1031 23:58:57.729966   22836 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1031 23:58:57.736968   22836 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1031 23:58:57.737031   22836 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1031 23:58:57.948688   22836 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 23:58:58.090562   22836 cache_images.go:92] LoadImages completed in 942.844875ms
	W1031 23:58:58.090671   22836 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7: no such file or directory
	I1031 23:58:58.090757   22836 ssh_runner.go:195] Run: crio config
	I1031 23:58:58.148587   22836 cni.go:84] Creating CNI manager for ""
	I1031 23:58:58.148612   22836 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 23:58:58.148631   22836 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 23:58:58.148657   22836 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.88 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-060181 NodeName:ingress-addon-legacy-060181 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.88"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.88 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1031 23:58:58.148937   22836 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.88
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-060181"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.88
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.88"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 23:58:58.149033   22836 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-060181 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.88
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-060181 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1031 23:58:58.149087   22836 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1031 23:58:58.158233   22836 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 23:58:58.158339   22836 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 23:58:58.166745   22836 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (435 bytes)
	I1031 23:58:58.183038   22836 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1031 23:58:58.199513   22836 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I1031 23:58:58.216267   22836 ssh_runner.go:195] Run: grep 192.168.39.88	control-plane.minikube.internal$ /etc/hosts
	I1031 23:58:58.219916   22836 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.88	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 23:58:58.232517   22836 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181 for IP: 192.168.39.88
	I1031 23:58:58.232548   22836 certs.go:190] acquiring lock for shared ca certs: {Name:mk6185b3938c4b51e80f57b7c81926c81b632b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 23:58:58.232678   22836 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key
	I1031 23:58:58.232713   22836 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key
	I1031 23:58:58.232750   22836 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.key
	I1031 23:58:58.232763   22836 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt with IP's: []
	I1031 23:58:58.548129   22836 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt ...
	I1031 23:58:58.548161   22836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: {Name:mk29a9e4e0e86b59cea3bf8bf7cf44ed4a581d41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 23:58:58.548359   22836 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.key ...
	I1031 23:58:58.548375   22836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.key: {Name:mkf230b7da69a7abdae446366571768ea7881d33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 23:58:58.548483   22836 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/apiserver.key.e1aac5bc
	I1031 23:58:58.548499   22836 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/apiserver.crt.e1aac5bc with IP's: [192.168.39.88 10.96.0.1 127.0.0.1 10.0.0.1]
	I1031 23:58:58.718209   22836 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/apiserver.crt.e1aac5bc ...
	I1031 23:58:58.718240   22836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/apiserver.crt.e1aac5bc: {Name:mk70f4529c84aaa2c924bdc9edcf13128df0abbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 23:58:58.718459   22836 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/apiserver.key.e1aac5bc ...
	I1031 23:58:58.718475   22836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/apiserver.key.e1aac5bc: {Name:mk2fe51d5940d9250b031e2ea67de9f1f2c6fccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 23:58:58.718569   22836 certs.go:337] copying /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/apiserver.crt.e1aac5bc -> /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/apiserver.crt
	I1031 23:58:58.718653   22836 certs.go:341] copying /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/apiserver.key.e1aac5bc -> /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/apiserver.key
	I1031 23:58:58.718727   22836 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/proxy-client.key
	I1031 23:58:58.718747   22836 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/proxy-client.crt with IP's: []
	I1031 23:58:58.897905   22836 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/proxy-client.crt ...
	I1031 23:58:58.897935   22836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/proxy-client.crt: {Name:mk8647e22cd70aaa161f65649e5bff32980d896f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 23:58:58.898092   22836 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/proxy-client.key ...
	I1031 23:58:58.898107   22836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/proxy-client.key: {Name:mk6d7e3e078e85c47de6afc9beedf2808912a8c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 23:58:58.898192   22836 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1031 23:58:58.898216   22836 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1031 23:58:58.898241   22836 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1031 23:58:58.898262   22836 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1031 23:58:58.898278   22836 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1031 23:58:58.898295   22836 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1031 23:58:58.898313   22836 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1031 23:58:58.898330   22836 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1031 23:58:58.898391   22836 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem (1338 bytes)
	W1031 23:58:58.898443   22836 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504_empty.pem, impossibly tiny 0 bytes
	I1031 23:58:58.898459   22836 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem (1675 bytes)
	I1031 23:58:58.898504   22836 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem (1078 bytes)
	I1031 23:58:58.898542   22836 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem (1123 bytes)
	I1031 23:58:58.898582   22836 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem (1675 bytes)
	I1031 23:58:58.898648   22836 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem (1708 bytes)
	I1031 23:58:58.898700   22836 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> /usr/share/ca-certificates/145042.pem
	I1031 23:58:58.898735   22836 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1031 23:58:58.898754   22836 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem -> /usr/share/ca-certificates/14504.pem
	I1031 23:58:58.899309   22836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 23:58:58.921742   22836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1031 23:58:58.943645   22836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 23:58:58.965924   22836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1031 23:58:58.988047   22836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 23:58:59.010607   22836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 23:58:59.034411   22836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 23:58:59.058363   22836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1031 23:58:59.082720   22836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /usr/share/ca-certificates/145042.pem (1708 bytes)
	I1031 23:58:59.106642   22836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 23:58:59.130608   22836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem --> /usr/share/ca-certificates/14504.pem (1338 bytes)
	I1031 23:58:59.154617   22836 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1031 23:58:59.170884   22836 ssh_runner.go:195] Run: openssl version
	I1031 23:58:59.176685   22836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 23:58:59.186977   22836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 23:58:59.191806   22836 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1031 23:58:59.191858   22836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 23:58:59.197410   22836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 23:58:59.207166   22836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14504.pem && ln -fs /usr/share/ca-certificates/14504.pem /etc/ssl/certs/14504.pem"
	I1031 23:58:59.217574   22836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14504.pem
	I1031 23:58:59.222430   22836 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1031 23:58:59.222496   22836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14504.pem
	I1031 23:58:59.228293   22836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14504.pem /etc/ssl/certs/51391683.0"
	I1031 23:58:59.238156   22836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145042.pem && ln -fs /usr/share/ca-certificates/145042.pem /etc/ssl/certs/145042.pem"
	I1031 23:58:59.248551   22836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145042.pem
	I1031 23:58:59.253607   22836 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1031 23:58:59.253685   22836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145042.pem
	I1031 23:58:59.259672   22836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145042.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 23:58:59.270264   22836 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 23:58:59.275255   22836 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1031 23:58:59.275323   22836 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-060181 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-060181 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.88 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 23:58:59.275414   22836 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1031 23:58:59.275489   22836 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 23:58:59.323677   22836 cri.go:89] found id: ""
	I1031 23:58:59.323769   22836 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 23:58:59.333424   22836 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 23:58:59.342772   22836 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 23:58:59.353100   22836 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 23:58:59.353157   22836 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1031 23:58:59.411304   22836 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1031 23:58:59.411368   22836 kubeadm.go:322] [preflight] Running pre-flight checks
	I1031 23:58:59.541635   22836 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 23:58:59.541815   22836 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 23:58:59.542094   22836 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 23:58:59.769072   22836 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 23:58:59.769987   22836 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 23:58:59.770080   22836 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1031 23:58:59.903274   22836 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 23:58:59.938318   22836 out.go:204]   - Generating certificates and keys ...
	I1031 23:58:59.938518   22836 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1031 23:58:59.938630   22836 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1031 23:59:00.065219   22836 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1031 23:59:00.192596   22836 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1031 23:59:00.287033   22836 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1031 23:59:00.478669   22836 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1031 23:59:00.733002   22836 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1031 23:59:00.733258   22836 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-060181 localhost] and IPs [192.168.39.88 127.0.0.1 ::1]
	I1031 23:59:00.871331   22836 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1031 23:59:00.871525   22836 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-060181 localhost] and IPs [192.168.39.88 127.0.0.1 ::1]
	I1031 23:59:01.147250   22836 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1031 23:59:01.247268   22836 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1031 23:59:01.350669   22836 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1031 23:59:01.350954   22836 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 23:59:01.503472   22836 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 23:59:01.605485   22836 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 23:59:01.793360   22836 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 23:59:02.049993   22836 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 23:59:02.051026   22836 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 23:59:02.052930   22836 out.go:204]   - Booting up control plane ...
	I1031 23:59:02.053028   22836 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 23:59:02.057374   22836 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 23:59:02.058890   22836 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 23:59:02.061041   22836 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 23:59:02.063176   22836 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 23:59:11.562584   22836 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503453 seconds
	I1031 23:59:11.562758   22836 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1031 23:59:11.579044   22836 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1031 23:59:12.106023   22836 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1031 23:59:12.106196   22836 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-060181 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1031 23:59:12.615957   22836 kubeadm.go:322] [bootstrap-token] Using token: r2hc9g.iivxha75ohkiqb5f
	I1031 23:59:12.617414   22836 out.go:204]   - Configuring RBAC rules ...
	I1031 23:59:12.617531   22836 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1031 23:59:12.626320   22836 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1031 23:59:12.634661   22836 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1031 23:59:12.637516   22836 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1031 23:59:12.641681   22836 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1031 23:59:12.645791   22836 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1031 23:59:12.660434   22836 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1031 23:59:12.978840   22836 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1031 23:59:13.055688   22836 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1031 23:59:13.055719   22836 kubeadm.go:322] 
	I1031 23:59:13.055802   22836 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1031 23:59:13.055822   22836 kubeadm.go:322] 
	I1031 23:59:13.055904   22836 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1031 23:59:13.055914   22836 kubeadm.go:322] 
	I1031 23:59:13.055963   22836 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1031 23:59:13.056057   22836 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1031 23:59:13.056138   22836 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1031 23:59:13.056164   22836 kubeadm.go:322] 
	I1031 23:59:13.056248   22836 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1031 23:59:13.056370   22836 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1031 23:59:13.056473   22836 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1031 23:59:13.056489   22836 kubeadm.go:322] 
	I1031 23:59:13.056576   22836 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1031 23:59:13.056677   22836 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1031 23:59:13.056695   22836 kubeadm.go:322] 
	I1031 23:59:13.056768   22836 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token r2hc9g.iivxha75ohkiqb5f \
	I1031 23:59:13.056855   22836 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 \
	I1031 23:59:13.056877   22836 kubeadm.go:322]     --control-plane 
	I1031 23:59:13.056886   22836 kubeadm.go:322] 
	I1031 23:59:13.056953   22836 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1031 23:59:13.056960   22836 kubeadm.go:322] 
	I1031 23:59:13.057056   22836 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token r2hc9g.iivxha75ohkiqb5f \
	I1031 23:59:13.057200   22836 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 
	I1031 23:59:13.057693   22836 kubeadm.go:322] W1031 23:58:59.396390     958 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1031 23:59:13.057807   22836 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 23:59:13.057985   22836 kubeadm.go:322] W1031 23:59:02.044903     958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1031 23:59:13.058091   22836 kubeadm.go:322] W1031 23:59:02.046416     958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1031 23:59:13.058109   22836 cni.go:84] Creating CNI manager for ""
	I1031 23:59:13.058119   22836 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 23:59:13.060872   22836 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 23:59:13.062248   22836 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 23:59:13.077041   22836 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1031 23:59:13.094867   22836 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 23:59:13.094981   22836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9 minikube.k8s.io/name=ingress-addon-legacy-060181 minikube.k8s.io/updated_at=2023_10_31T23_59_13_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:59:13.094985   22836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:59:13.136921   22836 ops.go:34] apiserver oom_adj: -16
	I1031 23:59:13.276036   22836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:59:13.506372   22836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:59:14.138401   22836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:59:14.638452   22836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:59:15.138576   22836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:59:15.637798   22836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:59:16.138088   22836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:59:16.638427   22836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:59:17.138338   22836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:59:17.638628   22836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:59:18.138690   22836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:59:18.638658   22836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:59:19.138713   22836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:59:19.638144   22836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:59:20.138418   22836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:59:20.638110   22836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:59:21.138693   22836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:59:21.638096   22836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:59:22.138339   22836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:59:22.638053   22836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:59:23.137813   22836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:59:23.638520   22836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:59:24.137960   22836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:59:24.638768   22836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:59:25.137882   22836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:59:25.638485   22836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:59:26.138180   22836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:59:26.638527   22836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:59:27.138565   22836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:59:27.638693   22836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 23:59:28.069645   22836 kubeadm.go:1081] duration metric: took 14.974730134s to wait for elevateKubeSystemPrivileges.
	I1031 23:59:28.069683   22836 kubeadm.go:406] StartCluster complete in 28.794371365s
	I1031 23:59:28.069705   22836 settings.go:142] acquiring lock: {Name:mk7f269e64dfd8d176737f993e01f6e6badafbc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 23:59:28.069776   22836 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1031 23:59:28.070429   22836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/kubeconfig: {Name:mk08da65b6c71084e1cfafb19800038e8c8303e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 23:59:28.070640   22836 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 23:59:28.070741   22836 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1031 23:59:28.070814   22836 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-060181"
	I1031 23:59:28.070829   22836 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-060181"
	I1031 23:59:28.070869   22836 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-060181"
	I1031 23:59:28.070875   22836 config.go:182] Loaded profile config "ingress-addon-legacy-060181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1031 23:59:28.070838   22836 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-060181"
	I1031 23:59:28.070992   22836 host.go:66] Checking if "ingress-addon-legacy-060181" exists ...
	I1031 23:59:28.071342   22836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:59:28.071287   22836 kapi.go:59] client config for ingress-addon-legacy-060181: &rest.Config{Host:"https://192.168.39.88:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.key", CAFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 23:59:28.071375   22836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:59:28.071380   22836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:59:28.071412   22836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:59:28.072044   22836 cert_rotation.go:137] Starting client certificate rotation controller
	I1031 23:59:28.087706   22836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35501
	I1031 23:59:28.087725   22836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34885
	I1031 23:59:28.088195   22836 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:59:28.088195   22836 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:59:28.089042   22836 main.go:141] libmachine: Using API Version  1
	I1031 23:59:28.089072   22836 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:59:28.089265   22836 main.go:141] libmachine: Using API Version  1
	I1031 23:59:28.089296   22836 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:59:28.089559   22836 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:59:28.089812   22836 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:59:28.090087   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetState
	I1031 23:59:28.090322   22836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:59:28.090366   22836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:59:28.093199   22836 kapi.go:59] client config for ingress-addon-legacy-060181: &rest.Config{Host:"https://192.168.39.88:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.key", CAFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 23:59:28.093749   22836 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-060181"
	I1031 23:59:28.093810   22836 host.go:66] Checking if "ingress-addon-legacy-060181" exists ...
	I1031 23:59:28.094288   22836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:59:28.094326   22836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:59:28.106499   22836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35529
	I1031 23:59:28.107015   22836 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:59:28.107585   22836 main.go:141] libmachine: Using API Version  1
	I1031 23:59:28.107610   22836 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:59:28.108038   22836 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:59:28.108259   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetState
	I1031 23:59:28.109907   22836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38891
	I1031 23:59:28.110371   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .DriverName
	I1031 23:59:28.110385   22836 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:59:28.113015   22836 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 23:59:28.110917   22836 main.go:141] libmachine: Using API Version  1
	I1031 23:59:28.114853   22836 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:59:28.114985   22836 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 23:59:28.115011   22836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 23:59:28.115030   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHHostname
	I1031 23:59:28.115274   22836 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:59:28.115860   22836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:59:28.115895   22836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:59:28.118811   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:59:28.119240   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:db:50", ip: ""} in network mk-ingress-addon-legacy-060181: {Iface:virbr1 ExpiryTime:2023-11-01 00:58:39 +0000 UTC Type:0 Mac:52:54:00:85:db:50 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:ingress-addon-legacy-060181 Clientid:01:52:54:00:85:db:50}
	I1031 23:59:28.119274   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined IP address 192.168.39.88 and MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:59:28.119472   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHPort
	I1031 23:59:28.119694   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHKeyPath
	I1031 23:59:28.119891   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHUsername
	I1031 23:59:28.120223   22836 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/ingress-addon-legacy-060181/id_rsa Username:docker}
	I1031 23:59:28.131679   22836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44587
	I1031 23:59:28.132216   22836 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:59:28.132751   22836 main.go:141] libmachine: Using API Version  1
	I1031 23:59:28.132778   22836 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:59:28.133257   22836 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:59:28.133584   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetState
	I1031 23:59:28.135989   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .DriverName
	I1031 23:59:28.136291   22836 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 23:59:28.136311   22836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 23:59:28.136331   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHHostname
	I1031 23:59:28.140015   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:59:28.140751   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:db:50", ip: ""} in network mk-ingress-addon-legacy-060181: {Iface:virbr1 ExpiryTime:2023-11-01 00:58:39 +0000 UTC Type:0 Mac:52:54:00:85:db:50 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:ingress-addon-legacy-060181 Clientid:01:52:54:00:85:db:50}
	I1031 23:59:28.140804   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | domain ingress-addon-legacy-060181 has defined IP address 192.168.39.88 and MAC address 52:54:00:85:db:50 in network mk-ingress-addon-legacy-060181
	I1031 23:59:28.141189   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHPort
	I1031 23:59:28.141438   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHKeyPath
	I1031 23:59:28.141714   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .GetSSHUsername
	I1031 23:59:28.141925   22836 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/ingress-addon-legacy-060181/id_rsa Username:docker}
	I1031 23:59:28.252881   22836 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-060181" context rescaled to 1 replicas
	I1031 23:59:28.252918   22836 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.88 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1031 23:59:28.255649   22836 out.go:177] * Verifying Kubernetes components...
	I1031 23:59:28.257738   22836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 23:59:28.298874   22836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 23:59:28.335702   22836 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1031 23:59:28.336317   22836 kapi.go:59] client config for ingress-addon-legacy-060181: &rest.Config{Host:"https://192.168.39.88:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.key", CAFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 23:59:28.336622   22836 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-060181" to be "Ready" ...
	I1031 23:59:28.368481   22836 node_ready.go:49] node "ingress-addon-legacy-060181" has status "Ready":"True"
	I1031 23:59:28.368512   22836 node_ready.go:38] duration metric: took 31.868788ms waiting for node "ingress-addon-legacy-060181" to be "Ready" ...
	I1031 23:59:28.368525   22836 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 23:59:28.396123   22836 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-nlkg9" in "kube-system" namespace to be "Ready" ...
	I1031 23:59:28.416601   22836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 23:59:29.224045   22836 main.go:141] libmachine: Making call to close driver server
	I1031 23:59:29.224066   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .Close
	I1031 23:59:29.224388   22836 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:59:29.224407   22836 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 23:59:29.224417   22836 main.go:141] libmachine: Making call to close driver server
	I1031 23:59:29.224426   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .Close
	I1031 23:59:29.224623   22836 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:59:29.224643   22836 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 23:59:29.224673   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | Closing plugin on server side
	I1031 23:59:29.254388   22836 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1031 23:59:29.255799   22836 main.go:141] libmachine: Making call to close driver server
	I1031 23:59:29.255823   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .Close
	I1031 23:59:29.256122   22836 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:59:29.256139   22836 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 23:59:29.256161   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | Closing plugin on server side
	I1031 23:59:29.301089   22836 main.go:141] libmachine: Making call to close driver server
	I1031 23:59:29.301118   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .Close
	I1031 23:59:29.301382   22836 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:59:29.301397   22836 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 23:59:29.301408   22836 main.go:141] libmachine: Making call to close driver server
	I1031 23:59:29.301418   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) Calling .Close
	I1031 23:59:29.303082   22836 main.go:141] libmachine: Successfully made call to close driver server
	I1031 23:59:29.303105   22836 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 23:59:29.303083   22836 main.go:141] libmachine: (ingress-addon-legacy-060181) DBG | Closing plugin on server side
	I1031 23:59:29.304979   22836 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1031 23:59:29.306842   22836 addons.go:502] enable addons completed in 1.236097313s: enabled=[default-storageclass storage-provisioner]
	I1031 23:59:30.527305   22836 pod_ready.go:102] pod "coredns-66bff467f8-nlkg9" in "kube-system" namespace has status "Ready":"False"
	I1031 23:59:32.529159   22836 pod_ready.go:102] pod "coredns-66bff467f8-nlkg9" in "kube-system" namespace has status "Ready":"False"
	I1031 23:59:35.027681   22836 pod_ready.go:102] pod "coredns-66bff467f8-nlkg9" in "kube-system" namespace has status "Ready":"False"
	I1031 23:59:37.028212   22836 pod_ready.go:102] pod "coredns-66bff467f8-nlkg9" in "kube-system" namespace has status "Ready":"False"
	I1031 23:59:39.028237   22836 pod_ready.go:102] pod "coredns-66bff467f8-nlkg9" in "kube-system" namespace has status "Ready":"False"
	I1031 23:59:41.528524   22836 pod_ready.go:102] pod "coredns-66bff467f8-nlkg9" in "kube-system" namespace has status "Ready":"False"
	I1031 23:59:43.531370   22836 pod_ready.go:102] pod "coredns-66bff467f8-nlkg9" in "kube-system" namespace has status "Ready":"False"
	I1031 23:59:46.028957   22836 pod_ready.go:102] pod "coredns-66bff467f8-nlkg9" in "kube-system" namespace has status "Ready":"False"
	I1031 23:59:48.527498   22836 pod_ready.go:102] pod "coredns-66bff467f8-nlkg9" in "kube-system" namespace has status "Ready":"False"
	I1031 23:59:50.528591   22836 pod_ready.go:102] pod "coredns-66bff467f8-nlkg9" in "kube-system" namespace has status "Ready":"False"
	I1031 23:59:53.028245   22836 pod_ready.go:102] pod "coredns-66bff467f8-nlkg9" in "kube-system" namespace has status "Ready":"False"
	I1031 23:59:55.028642   22836 pod_ready.go:102] pod "coredns-66bff467f8-nlkg9" in "kube-system" namespace has status "Ready":"False"
	I1031 23:59:57.028784   22836 pod_ready.go:102] pod "coredns-66bff467f8-nlkg9" in "kube-system" namespace has status "Ready":"False"
	I1031 23:59:59.527606   22836 pod_ready.go:102] pod "coredns-66bff467f8-nlkg9" in "kube-system" namespace has status "Ready":"False"
	I1101 00:00:01.528176   22836 pod_ready.go:102] pod "coredns-66bff467f8-nlkg9" in "kube-system" namespace has status "Ready":"False"
	I1101 00:00:04.027750   22836 pod_ready.go:102] pod "coredns-66bff467f8-nlkg9" in "kube-system" namespace has status "Ready":"False"
	I1101 00:00:06.027924   22836 pod_ready.go:102] pod "coredns-66bff467f8-nlkg9" in "kube-system" namespace has status "Ready":"False"
	I1101 00:00:08.027741   22836 pod_ready.go:92] pod "coredns-66bff467f8-nlkg9" in "kube-system" namespace has status "Ready":"True"
	I1101 00:00:08.027761   22836 pod_ready.go:81] duration metric: took 39.631609425s waiting for pod "coredns-66bff467f8-nlkg9" in "kube-system" namespace to be "Ready" ...
	I1101 00:00:08.027769   22836 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-zv7tl" in "kube-system" namespace to be "Ready" ...
	I1101 00:00:08.029964   22836 pod_ready.go:97] error getting pod "coredns-66bff467f8-zv7tl" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-zv7tl" not found
	I1101 00:00:08.029981   22836 pod_ready.go:81] duration metric: took 2.206904ms waiting for pod "coredns-66bff467f8-zv7tl" in "kube-system" namespace to be "Ready" ...
	E1101 00:00:08.029990   22836 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-66bff467f8-zv7tl" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-zv7tl" not found
	I1101 00:00:08.029997   22836 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-060181" in "kube-system" namespace to be "Ready" ...
	I1101 00:00:08.034727   22836 pod_ready.go:92] pod "etcd-ingress-addon-legacy-060181" in "kube-system" namespace has status "Ready":"True"
	I1101 00:00:08.034748   22836 pod_ready.go:81] duration metric: took 4.743175ms waiting for pod "etcd-ingress-addon-legacy-060181" in "kube-system" namespace to be "Ready" ...
	I1101 00:00:08.034759   22836 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-060181" in "kube-system" namespace to be "Ready" ...
	I1101 00:00:08.039968   22836 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-060181" in "kube-system" namespace has status "Ready":"True"
	I1101 00:00:08.039988   22836 pod_ready.go:81] duration metric: took 5.221234ms waiting for pod "kube-apiserver-ingress-addon-legacy-060181" in "kube-system" namespace to be "Ready" ...
	I1101 00:00:08.039999   22836 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-060181" in "kube-system" namespace to be "Ready" ...
	I1101 00:00:08.044885   22836 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-060181" in "kube-system" namespace has status "Ready":"True"
	I1101 00:00:08.044907   22836 pod_ready.go:81] duration metric: took 4.900072ms waiting for pod "kube-controller-manager-ingress-addon-legacy-060181" in "kube-system" namespace to be "Ready" ...
	I1101 00:00:08.044918   22836 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v75v6" in "kube-system" namespace to be "Ready" ...
	I1101 00:00:08.222537   22836 request.go:629] Waited for 175.144839ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.88:8443/api/v1/nodes/ingress-addon-legacy-060181
	I1101 00:00:08.225412   22836 pod_ready.go:92] pod "kube-proxy-v75v6" in "kube-system" namespace has status "Ready":"True"
	I1101 00:00:08.225436   22836 pod_ready.go:81] duration metric: took 180.509944ms waiting for pod "kube-proxy-v75v6" in "kube-system" namespace to be "Ready" ...
	I1101 00:00:08.225447   22836 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-060181" in "kube-system" namespace to be "Ready" ...
	I1101 00:00:08.421894   22836 request.go:629] Waited for 196.35288ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.88:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-060181
	I1101 00:00:08.622461   22836 request.go:629] Waited for 197.559644ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.88:8443/api/v1/nodes/ingress-addon-legacy-060181
	I1101 00:00:08.625897   22836 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-060181" in "kube-system" namespace has status "Ready":"True"
	I1101 00:00:08.625920   22836 pod_ready.go:81] duration metric: took 400.46529ms waiting for pod "kube-scheduler-ingress-addon-legacy-060181" in "kube-system" namespace to be "Ready" ...
	I1101 00:00:08.625930   22836 pod_ready.go:38] duration metric: took 40.25738927s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 00:00:08.625948   22836 api_server.go:52] waiting for apiserver process to appear ...
	I1101 00:00:08.626009   22836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:00:08.640508   22836 api_server.go:72] duration metric: took 40.38756407s to wait for apiserver process to appear ...
	I1101 00:00:08.640533   22836 api_server.go:88] waiting for apiserver healthz status ...
	I1101 00:00:08.640550   22836 api_server.go:253] Checking apiserver healthz at https://192.168.39.88:8443/healthz ...
	I1101 00:00:08.646463   22836 api_server.go:279] https://192.168.39.88:8443/healthz returned 200:
	ok
	I1101 00:00:08.647491   22836 api_server.go:141] control plane version: v1.18.20
	I1101 00:00:08.647512   22836 api_server.go:131] duration metric: took 6.971529ms to wait for apiserver health ...
	I1101 00:00:08.647523   22836 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 00:00:08.821985   22836 request.go:629] Waited for 174.39092ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.88:8443/api/v1/namespaces/kube-system/pods
	I1101 00:00:08.829021   22836 system_pods.go:59] 7 kube-system pods found
	I1101 00:00:08.829046   22836 system_pods.go:61] "coredns-66bff467f8-nlkg9" [c6082e46-3d42-4436-8b32-37fe9b44ab3b] Running
	I1101 00:00:08.829051   22836 system_pods.go:61] "etcd-ingress-addon-legacy-060181" [2e8899f3-2b59-42fe-b824-cc1bf857d790] Running
	I1101 00:00:08.829058   22836 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-060181" [e8020e21-ba72-41ab-a9ea-52ae10d2c958] Running
	I1101 00:00:08.829063   22836 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-060181" [083ee7a8-d636-48cf-b32d-4b3329257be2] Running
	I1101 00:00:08.829067   22836 system_pods.go:61] "kube-proxy-v75v6" [93d76cb2-f7ca-4baf-94ca-37cbcf15a697] Running
	I1101 00:00:08.829071   22836 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-060181" [9c67c977-6f3a-4eeb-a555-9808920008cf] Running
	I1101 00:00:08.829075   22836 system_pods.go:61] "storage-provisioner" [31467c5e-3e2b-4c76-b7b3-e5f5d19acce5] Running
	I1101 00:00:08.829081   22836 system_pods.go:74] duration metric: took 181.543237ms to wait for pod list to return data ...
	I1101 00:00:08.829091   22836 default_sa.go:34] waiting for default service account to be created ...
	I1101 00:00:09.022519   22836 request.go:629] Waited for 193.360536ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.88:8443/api/v1/namespaces/default/serviceaccounts
	I1101 00:00:09.026024   22836 default_sa.go:45] found service account: "default"
	I1101 00:00:09.026053   22836 default_sa.go:55] duration metric: took 196.956288ms for default service account to be created ...
	I1101 00:00:09.026064   22836 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 00:00:09.221510   22836 request.go:629] Waited for 195.377728ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.88:8443/api/v1/namespaces/kube-system/pods
	I1101 00:00:09.227146   22836 system_pods.go:86] 7 kube-system pods found
	I1101 00:00:09.227175   22836 system_pods.go:89] "coredns-66bff467f8-nlkg9" [c6082e46-3d42-4436-8b32-37fe9b44ab3b] Running
	I1101 00:00:09.227183   22836 system_pods.go:89] "etcd-ingress-addon-legacy-060181" [2e8899f3-2b59-42fe-b824-cc1bf857d790] Running
	I1101 00:00:09.227189   22836 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-060181" [e8020e21-ba72-41ab-a9ea-52ae10d2c958] Running
	I1101 00:00:09.227196   22836 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-060181" [083ee7a8-d636-48cf-b32d-4b3329257be2] Running
	I1101 00:00:09.227202   22836 system_pods.go:89] "kube-proxy-v75v6" [93d76cb2-f7ca-4baf-94ca-37cbcf15a697] Running
	I1101 00:00:09.227210   22836 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-060181" [9c67c977-6f3a-4eeb-a555-9808920008cf] Running
	I1101 00:00:09.227219   22836 system_pods.go:89] "storage-provisioner" [31467c5e-3e2b-4c76-b7b3-e5f5d19acce5] Running
	I1101 00:00:09.227235   22836 system_pods.go:126] duration metric: took 201.164394ms to wait for k8s-apps to be running ...
	I1101 00:00:09.227251   22836 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 00:00:09.227307   22836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 00:00:09.241402   22836 system_svc.go:56] duration metric: took 14.127064ms WaitForService to wait for kubelet.
	I1101 00:00:09.241429   22836 kubeadm.go:581] duration metric: took 40.988490391s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1101 00:00:09.241447   22836 node_conditions.go:102] verifying NodePressure condition ...
	I1101 00:00:09.421951   22836 request.go:629] Waited for 180.420321ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.88:8443/api/v1/nodes
	I1101 00:00:09.425534   22836 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 00:00:09.425566   22836 node_conditions.go:123] node cpu capacity is 2
	I1101 00:00:09.425580   22836 node_conditions.go:105] duration metric: took 184.12737ms to run NodePressure ...
	I1101 00:00:09.425594   22836 start.go:228] waiting for startup goroutines ...
	I1101 00:00:09.425604   22836 start.go:233] waiting for cluster config update ...
	I1101 00:00:09.425621   22836 start.go:242] writing updated cluster config ...
	I1101 00:00:09.425879   22836 ssh_runner.go:195] Run: rm -f paused
	I1101 00:00:09.471961   22836 start.go:600] kubectl: 1.28.3, cluster: 1.18.20 (minor skew: 10)
	I1101 00:00:09.474070   22836 out.go:177] 
	W1101 00:00:09.475473   22836 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.18.20.
	I1101 00:00:09.476837   22836 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1101 00:00:09.478351   22836 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-060181" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-10-31 23:58:35 UTC, ends at Wed 2023-11-01 00:03:31 UTC. --
	Nov 01 00:03:31 ingress-addon-legacy-060181 crio[719]: time="2023-11-01 00:03:31.738169261Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a1b54f1f-f185-4aac-9cb2-d0f314c3f0c9 name=/runtime.v1.RuntimeService/Version
	Nov 01 00:03:31 ingress-addon-legacy-060181 crio[719]: time="2023-11-01 00:03:31.739327763Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=86a08952-71e5-4e2c-87a3-dfd271bbfa33 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 00:03:31 ingress-addon-legacy-060181 crio[719]: time="2023-11-01 00:03:31.739891953Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698797011739875883,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202349,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=86a08952-71e5-4e2c-87a3-dfd271bbfa33 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 00:03:31 ingress-addon-legacy-060181 crio[719]: time="2023-11-01 00:03:31.740692477Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=42b97008-c498-4bd2-8897-47874179900f name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:03:31 ingress-addon-legacy-060181 crio[719]: time="2023-11-01 00:03:31.740744881Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=42b97008-c498-4bd2-8897-47874179900f name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:03:31 ingress-addon-legacy-060181 crio[719]: time="2023-11-01 00:03:31.740974613Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:abc9330d193891845a53bf6ae7ce5a4a4b7d5ccbc2909852822148856984f13e,PodSandboxId:38697f0eaabcc598fb5e49601dc720d0281a252b96d81f757043aae94805c28f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d,State:CONTAINER_RUNNING,CreatedAt:1698796999362399236,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-wgwnb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7b855959-222a-40dd-9094-d85fd243d217,},Annotations:map[string]string{io.kubernetes.container.hash: 5eaafb24,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:179abe355ba52a4baa66f6453e3bbcbeb43caa8625829ebd639aaab406727b67,PodSandboxId:28ec1206ebf2fdaf418383dafb1c7e03af2c3b050bcd2ba77ae121f3a3cbbbd5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1698796857103371044,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 04f9bdf4-9b60-4627-ad0b-34a6e7e61432,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 8106c9a0,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491fba860904c77ac7fac3b17e2ad622810277db9baf512e232c92d4b5ab562b,PodSandboxId:15873d693cbbe2bafe959360e0db8a11cb2b9adb3c25c78306a5eb456d5cb332,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1698796833619139411,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-cvwzp,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ea65cf5b-06f8-46dd-a61a-7e0784a1963e,},Annotations:map[string]string{io.kubernetes.container.hash: 60e8cc60,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:92a92db3c0c6f9f66fdd1558c9df9b18d8469603598ec8821efce443b46f35d4,PodSandboxId:f274181a1aa9cd63598d869e0cc6700294b7ad8f5dceab9bf822cd835f0446c5,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a013daf8730dbb3908d66f67c57053f09055fddb28fde0b5808cb24c27900dc8
,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1698796833498957304,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-jn7tf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ae9c9cb2-84bd-4f6a-804a-b1c1c702a525,},Annotations:map[string]string{io.kubernetes.container.hash: e798d8ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e084274fba1e7a89b1ea346a59ecba60fd27578dd706216a4dc59f5cbeb245a1,PodSandboxId:13846515861a60a8ecedde205be0de63baae35f603c4f4ee78b7799fb42582f6,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2
dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1698796820677696628,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-tvwbt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8c0e90d1-d687-4153-86b7-0740686e4dd6,},Annotations:map[string]string{io.kubernetes.container.hash: 171310ca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0caf7ff33b7dd32c6e5d4994f1b5bf65d9aa26bd3edb43838317ee161c4c1fcc,PodSandboxId:eb45db8522bd462de5a478b88e4bc36ac9a635488b3a01d450f94023979a32b4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8
872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698796770047859911,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31467c5e-3e2b-4c76-b7b3-e5f5d19acce5,},Annotations:map[string]string{io.kubernetes.container.hash: 3f7d16a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d5256c54cf44e9d17fb2a2c3705117cd938ee6675e46c6e04035d0711e573d5,PodSandboxId:39f3c6e17f7dd947be2285b0b0e30af34e80d2bb3a3948158c9b6e643ba6fab6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0
da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1698796769503944996,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v75v6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d76cb2-f7ca-4baf-94ca-37cbcf15a697,},Annotations:map[string]string{io.kubernetes.container.hash: 3f2106a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3590e6390d653680c69c71636dbd8705bb4deef34ab96eedb2e51ab8d1973888,PodSandboxId:f8b1a04d8f83d4b86612e35d69770f8ce1735b3e920f22e34128515a2de22457,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map
[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1698796768961169887,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-nlkg9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6082e46-3d42-4436-8b32-37fe9b44ab3b,},Annotations:map[string]string{io.kubernetes.container.hash: dd8ce204,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d729dee6c0e676228c5be374b11e667c336f9ce481b1bb097d0b991df58e4ce,PodSandboxId:44ea355bddac284052c534369f3a15faa763
5495086919fc86c4c12bd00f2d95,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1698796745853928411,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-060181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4aa9c369f82e5f2089975df28225c12,},Annotations:map[string]string{io.kubernetes.container.hash: 296ed226,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a8790c1933ac4cdef16ec3897307168184dfc01dbd23770f2d2d0dc2fe7dd23,PodSandboxId:13b8f60be197609723854b8cc7d503c9689bb79414a96ad2352648ba5c93fefa,Metadata:&Contain
erMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1698796744576063973,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-060181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:561b5a547e3498f8a515c533e618797d7c8fcd8bee2d7a19d68b52c7ffc3cf4e,PodSandboxId:0f7afccd05fae0ca5356c533ddda61b5d3c973abf39c7b9f8c4017b4be84b206,Metadata:&ContainerMeta
data{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1698796744490798593,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-060181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7741478932519f8828c53eaf355ef5fd,},Annotations:map[string]string{io.kubernetes.container.hash: 82b903f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7f4ba6b038325886d3d32ad6703d5cbb61a377cdfec995bcc034eefd630e86b,PodSandboxId:c822d65aa38bf24293b269816d7a48c0917b6f1e8e19921ef6c1bc697bf4d527,Metadata:&ContainerMetadata{N
ame:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1698796744228930732,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-060181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=42b97008-c498-4bd2-8897-47874179900f name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:03:31 ingress-addon-legacy-060181 crio[719]: time="2023-11-01 00:03:31.777312714Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=2599ae0e-361a-4148-9fb2-e5045976338d name=/runtime.v1.RuntimeService/Version
	Nov 01 00:03:31 ingress-addon-legacy-060181 crio[719]: time="2023-11-01 00:03:31.777381087Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=2599ae0e-361a-4148-9fb2-e5045976338d name=/runtime.v1.RuntimeService/Version
	Nov 01 00:03:31 ingress-addon-legacy-060181 crio[719]: time="2023-11-01 00:03:31.778702322Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d3807f8c-6b61-4251-8be0-96e407537810 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 00:03:31 ingress-addon-legacy-060181 crio[719]: time="2023-11-01 00:03:31.779164561Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698797011779152502,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202349,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=d3807f8c-6b61-4251-8be0-96e407537810 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 00:03:31 ingress-addon-legacy-060181 crio[719]: time="2023-11-01 00:03:31.779746982Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9ca20c2e-c9b7-4196-bff6-0c5e6d8489c6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:03:31 ingress-addon-legacy-060181 crio[719]: time="2023-11-01 00:03:31.779796834Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9ca20c2e-c9b7-4196-bff6-0c5e6d8489c6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:03:31 ingress-addon-legacy-060181 crio[719]: time="2023-11-01 00:03:31.780131602Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:abc9330d193891845a53bf6ae7ce5a4a4b7d5ccbc2909852822148856984f13e,PodSandboxId:38697f0eaabcc598fb5e49601dc720d0281a252b96d81f757043aae94805c28f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d,State:CONTAINER_RUNNING,CreatedAt:1698796999362399236,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-wgwnb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7b855959-222a-40dd-9094-d85fd243d217,},Annotations:map[string]string{io.kubernetes.container.hash: 5eaafb24,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:179abe355ba52a4baa66f6453e3bbcbeb43caa8625829ebd639aaab406727b67,PodSandboxId:28ec1206ebf2fdaf418383dafb1c7e03af2c3b050bcd2ba77ae121f3a3cbbbd5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1698796857103371044,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 04f9bdf4-9b60-4627-ad0b-34a6e7e61432,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 8106c9a0,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491fba860904c77ac7fac3b17e2ad622810277db9baf512e232c92d4b5ab562b,PodSandboxId:15873d693cbbe2bafe959360e0db8a11cb2b9adb3c25c78306a5eb456d5cb332,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1698796833619139411,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-cvwzp,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ea65cf5b-06f8-46dd-a61a-7e0784a1963e,},Annotations:map[string]string{io.kubernetes.container.hash: 60e8cc60,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:92a92db3c0c6f9f66fdd1558c9df9b18d8469603598ec8821efce443b46f35d4,PodSandboxId:f274181a1aa9cd63598d869e0cc6700294b7ad8f5dceab9bf822cd835f0446c5,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a013daf8730dbb3908d66f67c57053f09055fddb28fde0b5808cb24c27900dc8
,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1698796833498957304,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-jn7tf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ae9c9cb2-84bd-4f6a-804a-b1c1c702a525,},Annotations:map[string]string{io.kubernetes.container.hash: e798d8ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e084274fba1e7a89b1ea346a59ecba60fd27578dd706216a4dc59f5cbeb245a1,PodSandboxId:13846515861a60a8ecedde205be0de63baae35f603c4f4ee78b7799fb42582f6,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2
dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1698796820677696628,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-tvwbt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8c0e90d1-d687-4153-86b7-0740686e4dd6,},Annotations:map[string]string{io.kubernetes.container.hash: 171310ca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0caf7ff33b7dd32c6e5d4994f1b5bf65d9aa26bd3edb43838317ee161c4c1fcc,PodSandboxId:eb45db8522bd462de5a478b88e4bc36ac9a635488b3a01d450f94023979a32b4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8
872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698796770047859911,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31467c5e-3e2b-4c76-b7b3-e5f5d19acce5,},Annotations:map[string]string{io.kubernetes.container.hash: 3f7d16a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d5256c54cf44e9d17fb2a2c3705117cd938ee6675e46c6e04035d0711e573d5,PodSandboxId:39f3c6e17f7dd947be2285b0b0e30af34e80d2bb3a3948158c9b6e643ba6fab6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0
da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1698796769503944996,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v75v6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d76cb2-f7ca-4baf-94ca-37cbcf15a697,},Annotations:map[string]string{io.kubernetes.container.hash: 3f2106a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3590e6390d653680c69c71636dbd8705bb4deef34ab96eedb2e51ab8d1973888,PodSandboxId:f8b1a04d8f83d4b86612e35d69770f8ce1735b3e920f22e34128515a2de22457,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map
[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1698796768961169887,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-nlkg9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6082e46-3d42-4436-8b32-37fe9b44ab3b,},Annotations:map[string]string{io.kubernetes.container.hash: dd8ce204,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d729dee6c0e676228c5be374b11e667c336f9ce481b1bb097d0b991df58e4ce,PodSandboxId:44ea355bddac284052c534369f3a15faa763
5495086919fc86c4c12bd00f2d95,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1698796745853928411,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-060181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4aa9c369f82e5f2089975df28225c12,},Annotations:map[string]string{io.kubernetes.container.hash: 296ed226,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a8790c1933ac4cdef16ec3897307168184dfc01dbd23770f2d2d0dc2fe7dd23,PodSandboxId:13b8f60be197609723854b8cc7d503c9689bb79414a96ad2352648ba5c93fefa,Metadata:&Contain
erMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1698796744576063973,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-060181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:561b5a547e3498f8a515c533e618797d7c8fcd8bee2d7a19d68b52c7ffc3cf4e,PodSandboxId:0f7afccd05fae0ca5356c533ddda61b5d3c973abf39c7b9f8c4017b4be84b206,Metadata:&ContainerMeta
data{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1698796744490798593,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-060181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7741478932519f8828c53eaf355ef5fd,},Annotations:map[string]string{io.kubernetes.container.hash: 82b903f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7f4ba6b038325886d3d32ad6703d5cbb61a377cdfec995bcc034eefd630e86b,PodSandboxId:c822d65aa38bf24293b269816d7a48c0917b6f1e8e19921ef6c1bc697bf4d527,Metadata:&ContainerMetadata{N
ame:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1698796744228930732,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-060181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9ca20c2e-c9b7-4196-bff6-0c5e6d8489c6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:03:31 ingress-addon-legacy-060181 crio[719]: time="2023-11-01 00:03:31.782169502Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=120c15d9-d120-4404-b14c-ae6082f09633 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Nov 01 00:03:31 ingress-addon-legacy-060181 crio[719]: time="2023-11-01 00:03:31.782511157Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:38697f0eaabcc598fb5e49601dc720d0281a252b96d81f757043aae94805c28f,Metadata:&PodSandboxMetadata{Name:hello-world-app-5f5d8b66bb-wgwnb,Uid:7b855959-222a-40dd-9094-d85fd243d217,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698796995907159274,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-wgwnb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7b855959-222a-40dd-9094-d85fd243d217,pod-template-hash: 5f5d8b66bb,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-01T00:03:15.563739810Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:28ec1206ebf2fdaf418383dafb1c7e03af2c3b050bcd2ba77ae121f3a3cbbbd5,Metadata:&PodSandboxMetadata{Name:nginx,Uid:04f9bdf4-9b60-4627-ad0b-34a6e7e61432,Namespace:defau
lt,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698796852135160873,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 04f9bdf4-9b60-4627-ad0b-34a6e7e61432,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-01T00:00:51.795312741Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4f9b2c42e3192524e9d6afce17959d0449b6ee88fdcaaa98907130b0de822d06,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:e52bda3d-abf4-416e-af23-e7290070a0b5,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1698796835422829403,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e52bda3d-abf4-416e-af23-e7290070a0b5,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configura
tion: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2023-11-01T00:00:35.076695006Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:15873d693cbbe2bafe959360e0db8a11cb2b9adb3c25c78306a5eb456d5cb332,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-7fcf777cb7-cvwzp,Uid:ea65cf5b-06f8-46dd-a61a
-7e0784a1963e,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1698796826249188314,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-cvwzp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ea65cf5b-06f8-46dd-a61a-7e0784a1963e,pod-template-hash: 7fcf777cb7,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-01T00:00:10.305233268Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:13846515861a60a8ecedde205be0de63baae35f603c4f4ee78b7799fb42582f6,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-tvwbt,Uid:8c0e90d1-d687-4153-86b7-0740686e4dd6,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1698796811613586875,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/in
stance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,controller-uid: 4426b24e-579e-4929-9128-c24cc97d11e8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-tvwbt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8c0e90d1-d687-4153-86b7-0740686e4dd6,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-01T00:00:10.364282442Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f274181a1aa9cd63598d869e0cc6700294b7ad8f5dceab9bf822cd835f0446c5,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-jn7tf,Uid:ae9c9cb2-84bd-4f6a-804a-b1c1c702a525,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1698796811405085939,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,controller-uid: 9f440724-afbe-4b88-8c26-fd9b2eff308f,io.kubernetes.container.name: POD,io.kuberne
tes.pod.name: ingress-nginx-admission-patch-jn7tf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ae9c9cb2-84bd-4f6a-804a-b1c1c702a525,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-01T00:00:10.462202678Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:eb45db8522bd462de5a478b88e4bc36ac9a635488b3a01d450f94023979a32b4,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:31467c5e-3e2b-4c76-b7b3-e5f5d19acce5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698796769647200265,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31467c5e-3e2b-4c76-b7b3-e5f5d19acce5,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annota
tions\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-10-31T23:59:29.299616462Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f8b1a04d8f83d4b86612e35d69770f8ce1735b3e920f22e34128515a2de22457,Metadata:&PodSandboxMetadata{Name:coredns-66bff467f8-nlkg9,Uid:c6082e46-3d42-4436-8b32-37fe9b44ab3b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698796768470828569,Labels:map[string]string{io.kubernetes.container.
name: POD,io.kubernetes.pod.name: coredns-66bff467f8-nlkg9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6082e46-3d42-4436-8b32-37fe9b44ab3b,k8s-app: kube-dns,pod-template-hash: 66bff467f8,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-31T23:59:28.118051481Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:39f3c6e17f7dd947be2285b0b0e30af34e80d2bb3a3948158c9b6e643ba6fab6,Metadata:&PodSandboxMetadata{Name:kube-proxy-v75v6,Uid:93d76cb2-f7ca-4baf-94ca-37cbcf15a697,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698796768071370907,Labels:map[string]string{controller-revision-hash: 5bdc57b48f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-v75v6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d76cb2-f7ca-4baf-94ca-37cbcf15a697,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-31T23:59:27.337785204Z,kubernetes.io/config.source: api,},Runtime
Handler:,},&PodSandbox{Id:13b8f60be197609723854b8cc7d503c9689bb79414a96ad2352648ba5c93fefa,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ingress-addon-legacy-060181,Uid:d12e497b0008e22acbcd5a9cf2dd48ac,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698796743825407806,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-060181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d12e497b0008e22acbcd5a9cf2dd48ac,kubernetes.io/config.seen: 2023-10-31T23:59:02.479859282Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0f7afccd05fae0ca5356c533ddda61b5d3c973abf39c7b9f8c4017b4be84b206,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ingress-addon-legacy-060181,Uid:7741478932519f8828c53eaf355ef5fd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698
796743819081880,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-060181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7741478932519f8828c53eaf355ef5fd,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.88:8443,kubernetes.io/config.hash: 7741478932519f8828c53eaf355ef5fd,kubernetes.io/config.seen: 2023-10-31T23:59:02.479847353Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c822d65aa38bf24293b269816d7a48c0917b6f1e8e19921ef6c1bc697bf4d527,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ingress-addon-legacy-060181,Uid:b395a1e17534e69e27827b1f8d737725,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698796743813172346,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-0
60181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b395a1e17534e69e27827b1f8d737725,kubernetes.io/config.seen: 2023-10-31T23:59:02.479857770Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:44ea355bddac284052c534369f3a15faa7635495086919fc86c4c12bd00f2d95,Metadata:&PodSandboxMetadata{Name:etcd-ingress-addon-legacy-060181,Uid:e4aa9c369f82e5f2089975df28225c12,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698796743749766477,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ingress-addon-legacy-060181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4aa9c369f82e5f2089975df28225c12,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.88:2379,kubernetes.io/config.hash: e4aa9c369f82e5f2089975df28225c12,kubernetes.
io/config.seen: 2023-10-31T23:59:02.479860550Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=120c15d9-d120-4404-b14c-ae6082f09633 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Nov 01 00:03:31 ingress-addon-legacy-060181 crio[719]: time="2023-11-01 00:03:31.783197284Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d532efbf-637f-4810-b482-93f5263511ab name=/runtime.v1alpha2.RuntimeService/ListContainers
	Nov 01 00:03:31 ingress-addon-legacy-060181 crio[719]: time="2023-11-01 00:03:31.783246494Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d532efbf-637f-4810-b482-93f5263511ab name=/runtime.v1alpha2.RuntimeService/ListContainers
	Nov 01 00:03:31 ingress-addon-legacy-060181 crio[719]: time="2023-11-01 00:03:31.783576204Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:abc9330d193891845a53bf6ae7ce5a4a4b7d5ccbc2909852822148856984f13e,PodSandboxId:38697f0eaabcc598fb5e49601dc720d0281a252b96d81f757043aae94805c28f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d,State:CONTAINER_RUNNING,CreatedAt:1698796999362399236,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-wgwnb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7b855959-222a-40dd-9094-d85fd243d217,},Annotations:map[string]string{io.kubernetes.container.hash: 5eaafb24,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:179abe355ba52a4baa66f6453e3bbcbeb43caa8625829ebd639aaab406727b67,PodSandboxId:28ec1206ebf2fdaf418383dafb1c7e03af2c3b050bcd2ba77ae121f3a3cbbbd5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1698796857103371044,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 04f9bdf4-9b60-4627-ad0b-34a6e7e61432,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 8106c9a0,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491fba860904c77ac7fac3b17e2ad622810277db9baf512e232c92d4b5ab562b,PodSandboxId:15873d693cbbe2bafe959360e0db8a11cb2b9adb3c25c78306a5eb456d5cb332,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1698796833619139411,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-cvwzp,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ea65cf5b-06f8-46dd-a61a-7e0784a1963e,},Annotations:map[string]string{io.kubernetes.container.hash: 60e8cc60,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:92a92db3c0c6f9f66fdd1558c9df9b18d8469603598ec8821efce443b46f35d4,PodSandboxId:f274181a1aa9cd63598d869e0cc6700294b7ad8f5dceab9bf822cd835f0446c5,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a013daf8730dbb3908d66f67c57053f09055fddb28fde0b5808cb24c27900dc8
,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1698796833498957304,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-jn7tf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ae9c9cb2-84bd-4f6a-804a-b1c1c702a525,},Annotations:map[string]string{io.kubernetes.container.hash: e798d8ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e084274fba1e7a89b1ea346a59ecba60fd27578dd706216a4dc59f5cbeb245a1,PodSandboxId:13846515861a60a8ecedde205be0de63baae35f603c4f4ee78b7799fb42582f6,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2
dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1698796820677696628,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-tvwbt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8c0e90d1-d687-4153-86b7-0740686e4dd6,},Annotations:map[string]string{io.kubernetes.container.hash: 171310ca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0caf7ff33b7dd32c6e5d4994f1b5bf65d9aa26bd3edb43838317ee161c4c1fcc,PodSandboxId:eb45db8522bd462de5a478b88e4bc36ac9a635488b3a01d450f94023979a32b4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8
872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698796770047859911,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31467c5e-3e2b-4c76-b7b3-e5f5d19acce5,},Annotations:map[string]string{io.kubernetes.container.hash: 3f7d16a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d5256c54cf44e9d17fb2a2c3705117cd938ee6675e46c6e04035d0711e573d5,PodSandboxId:39f3c6e17f7dd947be2285b0b0e30af34e80d2bb3a3948158c9b6e643ba6fab6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0
da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1698796769503944996,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v75v6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d76cb2-f7ca-4baf-94ca-37cbcf15a697,},Annotations:map[string]string{io.kubernetes.container.hash: 3f2106a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3590e6390d653680c69c71636dbd8705bb4deef34ab96eedb2e51ab8d1973888,PodSandboxId:f8b1a04d8f83d4b86612e35d69770f8ce1735b3e920f22e34128515a2de22457,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map
[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1698796768961169887,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-nlkg9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6082e46-3d42-4436-8b32-37fe9b44ab3b,},Annotations:map[string]string{io.kubernetes.container.hash: dd8ce204,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d729dee6c0e676228c5be374b11e667c336f9ce481b1bb097d0b991df58e4ce,PodSandboxId:44ea355bddac284052c534369f3a15faa763
5495086919fc86c4c12bd00f2d95,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1698796745853928411,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-060181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4aa9c369f82e5f2089975df28225c12,},Annotations:map[string]string{io.kubernetes.container.hash: 296ed226,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a8790c1933ac4cdef16ec3897307168184dfc01dbd23770f2d2d0dc2fe7dd23,PodSandboxId:13b8f60be197609723854b8cc7d503c9689bb79414a96ad2352648ba5c93fefa,Metadata:&Contain
erMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1698796744576063973,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-060181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:561b5a547e3498f8a515c533e618797d7c8fcd8bee2d7a19d68b52c7ffc3cf4e,PodSandboxId:0f7afccd05fae0ca5356c533ddda61b5d3c973abf39c7b9f8c4017b4be84b206,Metadata:&ContainerMeta
data{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1698796744490798593,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-060181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7741478932519f8828c53eaf355ef5fd,},Annotations:map[string]string{io.kubernetes.container.hash: 82b903f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7f4ba6b038325886d3d32ad6703d5cbb61a377cdfec995bcc034eefd630e86b,PodSandboxId:c822d65aa38bf24293b269816d7a48c0917b6f1e8e19921ef6c1bc697bf4d527,Metadata:&ContainerMetadata{N
ame:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1698796744228930732,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-060181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d532efbf-637f-4810-b482-93f5263511ab name=/runtime.v1alpha2.RuntimeService/ListContainers
	Nov 01 00:03:31 ingress-addon-legacy-060181 crio[719]: time="2023-11-01 00:03:31.819330246Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b55ae7c3-2dbb-43b4-8ab3-2ad5fa5b1c44 name=/runtime.v1.RuntimeService/Version
	Nov 01 00:03:31 ingress-addon-legacy-060181 crio[719]: time="2023-11-01 00:03:31.819490339Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b55ae7c3-2dbb-43b4-8ab3-2ad5fa5b1c44 name=/runtime.v1.RuntimeService/Version
	Nov 01 00:03:31 ingress-addon-legacy-060181 crio[719]: time="2023-11-01 00:03:31.820622530Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=40a30564-ef62-4d50-b9f9-8c501662dec0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 00:03:31 ingress-addon-legacy-060181 crio[719]: time="2023-11-01 00:03:31.821165314Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698797011821150774,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202349,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=40a30564-ef62-4d50-b9f9-8c501662dec0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 00:03:31 ingress-addon-legacy-060181 crio[719]: time="2023-11-01 00:03:31.821853879Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=49d2ff5f-8561-461f-be21-06f478a99770 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:03:31 ingress-addon-legacy-060181 crio[719]: time="2023-11-01 00:03:31.821930135Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=49d2ff5f-8561-461f-be21-06f478a99770 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:03:31 ingress-addon-legacy-060181 crio[719]: time="2023-11-01 00:03:31.825144097Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:abc9330d193891845a53bf6ae7ce5a4a4b7d5ccbc2909852822148856984f13e,PodSandboxId:38697f0eaabcc598fb5e49601dc720d0281a252b96d81f757043aae94805c28f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d,State:CONTAINER_RUNNING,CreatedAt:1698796999362399236,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-wgwnb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7b855959-222a-40dd-9094-d85fd243d217,},Annotations:map[string]string{io.kubernetes.container.hash: 5eaafb24,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:179abe355ba52a4baa66f6453e3bbcbeb43caa8625829ebd639aaab406727b67,PodSandboxId:28ec1206ebf2fdaf418383dafb1c7e03af2c3b050bcd2ba77ae121f3a3cbbbd5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1698796857103371044,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 04f9bdf4-9b60-4627-ad0b-34a6e7e61432,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 8106c9a0,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491fba860904c77ac7fac3b17e2ad622810277db9baf512e232c92d4b5ab562b,PodSandboxId:15873d693cbbe2bafe959360e0db8a11cb2b9adb3c25c78306a5eb456d5cb332,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1698796833619139411,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-cvwzp,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ea65cf5b-06f8-46dd-a61a-7e0784a1963e,},Annotations:map[string]string{io.kubernetes.container.hash: 60e8cc60,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:92a92db3c0c6f9f66fdd1558c9df9b18d8469603598ec8821efce443b46f35d4,PodSandboxId:f274181a1aa9cd63598d869e0cc6700294b7ad8f5dceab9bf822cd835f0446c5,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a013daf8730dbb3908d66f67c57053f09055fddb28fde0b5808cb24c27900dc8
,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1698796833498957304,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-jn7tf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ae9c9cb2-84bd-4f6a-804a-b1c1c702a525,},Annotations:map[string]string{io.kubernetes.container.hash: e798d8ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e084274fba1e7a89b1ea346a59ecba60fd27578dd706216a4dc59f5cbeb245a1,PodSandboxId:13846515861a60a8ecedde205be0de63baae35f603c4f4ee78b7799fb42582f6,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2
dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1698796820677696628,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-tvwbt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8c0e90d1-d687-4153-86b7-0740686e4dd6,},Annotations:map[string]string{io.kubernetes.container.hash: 171310ca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0caf7ff33b7dd32c6e5d4994f1b5bf65d9aa26bd3edb43838317ee161c4c1fcc,PodSandboxId:eb45db8522bd462de5a478b88e4bc36ac9a635488b3a01d450f94023979a32b4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8
872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698796770047859911,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31467c5e-3e2b-4c76-b7b3-e5f5d19acce5,},Annotations:map[string]string{io.kubernetes.container.hash: 3f7d16a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d5256c54cf44e9d17fb2a2c3705117cd938ee6675e46c6e04035d0711e573d5,PodSandboxId:39f3c6e17f7dd947be2285b0b0e30af34e80d2bb3a3948158c9b6e643ba6fab6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0
da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1698796769503944996,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v75v6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d76cb2-f7ca-4baf-94ca-37cbcf15a697,},Annotations:map[string]string{io.kubernetes.container.hash: 3f2106a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3590e6390d653680c69c71636dbd8705bb4deef34ab96eedb2e51ab8d1973888,PodSandboxId:f8b1a04d8f83d4b86612e35d69770f8ce1735b3e920f22e34128515a2de22457,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map
[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1698796768961169887,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-nlkg9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6082e46-3d42-4436-8b32-37fe9b44ab3b,},Annotations:map[string]string{io.kubernetes.container.hash: dd8ce204,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d729dee6c0e676228c5be374b11e667c336f9ce481b1bb097d0b991df58e4ce,PodSandboxId:44ea355bddac284052c534369f3a15faa763
5495086919fc86c4c12bd00f2d95,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1698796745853928411,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-060181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4aa9c369f82e5f2089975df28225c12,},Annotations:map[string]string{io.kubernetes.container.hash: 296ed226,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a8790c1933ac4cdef16ec3897307168184dfc01dbd23770f2d2d0dc2fe7dd23,PodSandboxId:13b8f60be197609723854b8cc7d503c9689bb79414a96ad2352648ba5c93fefa,Metadata:&Contain
erMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1698796744576063973,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-060181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:561b5a547e3498f8a515c533e618797d7c8fcd8bee2d7a19d68b52c7ffc3cf4e,PodSandboxId:0f7afccd05fae0ca5356c533ddda61b5d3c973abf39c7b9f8c4017b4be84b206,Metadata:&ContainerMeta
data{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1698796744490798593,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-060181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7741478932519f8828c53eaf355ef5fd,},Annotations:map[string]string{io.kubernetes.container.hash: 82b903f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7f4ba6b038325886d3d32ad6703d5cbb61a377cdfec995bcc034eefd630e86b,PodSandboxId:c822d65aa38bf24293b269816d7a48c0917b6f1e8e19921ef6c1bc697bf4d527,Metadata:&ContainerMetadata{N
ame:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1698796744228930732,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-060181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=49d2ff5f-8561-461f-be21-06f478a99770 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	abc9330d19389       gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d            12 seconds ago      Running             hello-world-app           0                   38697f0eaabcc       hello-world-app-5f5d8b66bb-wgwnb
	179abe355ba52       docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d                    2 minutes ago       Running             nginx                     0                   28ec1206ebf2f       nginx
	491fba860904c       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   15873d693cbbe       ingress-nginx-controller-7fcf777cb7-cvwzp
	92a92db3c0c6f       a013daf8730dbb3908d66f67c57053f09055fddb28fde0b5808cb24c27900dc8                                                   2 minutes ago       Exited              patch                     2                   f274181a1aa9c       ingress-nginx-admission-patch-jn7tf
	e084274fba1e7       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   13846515861a6       ingress-nginx-admission-create-tvwbt
	0caf7ff33b7dd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   4 minutes ago       Running             storage-provisioner       0                   eb45db8522bd4       storage-provisioner
	3d5256c54cf44       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   4 minutes ago       Running             kube-proxy                0                   39f3c6e17f7dd       kube-proxy-v75v6
	3590e6390d653       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   4 minutes ago       Running             coredns                   0                   f8b1a04d8f83d       coredns-66bff467f8-nlkg9
	8d729dee6c0e6       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   4 minutes ago       Running             etcd                      0                   44ea355bddac2       etcd-ingress-addon-legacy-060181
	5a8790c1933ac       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   4 minutes ago       Running             kube-scheduler            0                   13b8f60be1976       kube-scheduler-ingress-addon-legacy-060181
	561b5a547e349       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   4 minutes ago       Running             kube-apiserver            0                   0f7afccd05fae       kube-apiserver-ingress-addon-legacy-060181
	e7f4ba6b03832       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   4 minutes ago       Running             kube-controller-manager   0                   c822d65aa38bf       kube-controller-manager-ingress-addon-legacy-060181
	
	* 
	* ==> coredns [3590e6390d653680c69c71636dbd8705bb4deef34ab96eedb2e51ab8d1973888] <==
	* [INFO] 10.244.0.5:43776 - 35938 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000111887s
	[INFO] 10.244.0.5:43776 - 332 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000087069s
	[INFO] 10.244.0.5:43776 - 41728 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000081099s
	[INFO] 10.244.0.5:43776 - 60166 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000114842s
	[INFO] 10.244.0.5:56861 - 63259 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000093836s
	[INFO] 10.244.0.5:56861 - 29626 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000050658s
	[INFO] 10.244.0.5:56861 - 42923 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000068222s
	[INFO] 10.244.0.5:56861 - 37997 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000064201s
	[INFO] 10.244.0.5:56861 - 1175 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000071476s
	[INFO] 10.244.0.5:56861 - 62755 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00008621s
	[INFO] 10.244.0.5:56861 - 17853 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000054099s
	[INFO] 10.244.0.5:60569 - 7155 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000081358s
	[INFO] 10.244.0.5:45556 - 24868 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000095954s
	[INFO] 10.244.0.5:45556 - 9054 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000060012s
	[INFO] 10.244.0.5:45556 - 7250 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000041136s
	[INFO] 10.244.0.5:60569 - 14246 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000024615s
	[INFO] 10.244.0.5:45556 - 40472 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000038198s
	[INFO] 10.244.0.5:60569 - 4941 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00002465s
	[INFO] 10.244.0.5:45556 - 61783 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000070725s
	[INFO] 10.244.0.5:60569 - 65408 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000196051s
	[INFO] 10.244.0.5:45556 - 2938 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038824s
	[INFO] 10.244.0.5:60569 - 3507 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000073854s
	[INFO] 10.244.0.5:60569 - 52356 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037526s
	[INFO] 10.244.0.5:45556 - 16249 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000058262s
	[INFO] 10.244.0.5:60569 - 38370 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000044701s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-060181
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-060181
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9
	                    minikube.k8s.io/name=ingress-addon-legacy-060181
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_31T23_59_13_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 Oct 2023 23:59:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-060181
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Nov 2023 00:03:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Nov 2023 00:01:13 +0000   Tue, 31 Oct 2023 23:59:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Nov 2023 00:01:13 +0000   Tue, 31 Oct 2023 23:59:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Nov 2023 00:01:13 +0000   Tue, 31 Oct 2023 23:59:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Nov 2023 00:01:13 +0000   Tue, 31 Oct 2023 23:59:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.88
	  Hostname:    ingress-addon-legacy-060181
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012808Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012808Ki
	  pods:               110
	System Info:
	  Machine ID:                 f4cbaa322ee1434b854967a0bbcd9505
	  System UUID:                f4cbaa32-2ee1-434b-8549-67a0bbcd9505
	  Boot ID:                    8a7d6b89-2772-49ca-8189-a6be1b42d036
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-wgwnb                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 coredns-66bff467f8-nlkg9                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m5s
	  kube-system                 etcd-ingress-addon-legacy-060181                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-apiserver-ingress-addon-legacy-060181             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-060181    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-proxy-v75v6                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-scheduler-ingress-addon-legacy-060181             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 4m30s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m30s (x3 over 4m30s)  kubelet     Node ingress-addon-legacy-060181 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m30s (x3 over 4m30s)  kubelet     Node ingress-addon-legacy-060181 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m30s (x3 over 4m30s)  kubelet     Node ingress-addon-legacy-060181 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m30s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 4m19s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m19s                  kubelet     Node ingress-addon-legacy-060181 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m19s                  kubelet     Node ingress-addon-legacy-060181 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m19s                  kubelet     Node ingress-addon-legacy-060181 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m19s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m9s                   kubelet     Node ingress-addon-legacy-060181 status is now: NodeReady
	  Normal  Starting                 4m3s                   kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Oct31 23:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.088165] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.434014] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.936478] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.122154] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.995334] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +11.047107] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.115153] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.158237] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.120304] systemd-fstab-generator[681]: Ignoring "noauto" for root device
	[  +0.215926] systemd-fstab-generator[705]: Ignoring "noauto" for root device
	[  +7.992599] systemd-fstab-generator[1030]: Ignoring "noauto" for root device
	[Oct31 23:59] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +9.903167] systemd-fstab-generator[1418]: Ignoring "noauto" for root device
	[ +15.804269] kauditd_printk_skb: 6 callbacks suppressed
	[Nov 1 00:00] kauditd_printk_skb: 11 callbacks suppressed
	[ +14.047011] kauditd_printk_skb: 6 callbacks suppressed
	[ +12.289138] kauditd_printk_skb: 3 callbacks suppressed
	[ +23.271620] kauditd_printk_skb: 3 callbacks suppressed
	[Nov 1 00:03] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [8d729dee6c0e676228c5be374b11e667c336f9ce481b1bb097d0b991df58e4ce] <==
	* 2023-10-31 23:59:05.972999 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-10-31 23:59:05.974062 I | etcdserver: aa0bd43d5988e1af as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/10/31 23:59:05 INFO: aa0bd43d5988e1af switched to configuration voters=(12253120571151802799)
	2023-10-31 23:59:05.975111 I | etcdserver/membership: added member aa0bd43d5988e1af [https://192.168.39.88:2380] to cluster 9f9d2ecdb39156b6
	2023-10-31 23:59:05.975983 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-31 23:59:05.976101 I | embed: listening for peers on 192.168.39.88:2380
	2023-10-31 23:59:05.976233 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/10/31 23:59:06 INFO: aa0bd43d5988e1af is starting a new election at term 1
	raft2023/10/31 23:59:06 INFO: aa0bd43d5988e1af became candidate at term 2
	raft2023/10/31 23:59:06 INFO: aa0bd43d5988e1af received MsgVoteResp from aa0bd43d5988e1af at term 2
	raft2023/10/31 23:59:06 INFO: aa0bd43d5988e1af became leader at term 2
	raft2023/10/31 23:59:06 INFO: raft.node: aa0bd43d5988e1af elected leader aa0bd43d5988e1af at term 2
	2023-10-31 23:59:06.463255 I | etcdserver: setting up the initial cluster version to 3.4
	2023-10-31 23:59:06.464832 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-10-31 23:59:06.464923 I | etcdserver/api: enabled capabilities for version 3.4
	2023-10-31 23:59:06.465005 I | etcdserver: published {Name:ingress-addon-legacy-060181 ClientURLs:[https://192.168.39.88:2379]} to cluster 9f9d2ecdb39156b6
	2023-10-31 23:59:06.465042 I | embed: ready to serve client requests
	2023-10-31 23:59:06.465320 I | embed: ready to serve client requests
	2023-10-31 23:59:06.466631 I | embed: serving client requests on 192.168.39.88:2379
	2023-10-31 23:59:06.467736 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-31 23:59:28.045550 W | etcdserver: request "header:<ID:16262370196448406866 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-66bff467f8-nlkg9\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-66bff467f8-nlkg9\" value_size:3275 >> failure:<>>" with result "size:16" took too long (516.730275ms) to execute
	2023-10-31 23:59:28.051204 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/coredns\" " with result "range_response_count:1 size:3868" took too long (531.227891ms) to execute
	2023-10-31 23:59:28.051575 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:1 size:209" took too long (325.320705ms) to execute
	2023-10-31 23:59:28.051825 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-66bff467f8-zv7tl\" " with result "range_response_count:1 size:3753" took too long (531.024375ms) to execute
	2023-10-31 23:59:28.052721 W | etcdserver: read-only range request "key:\"/registry/clusterroles/admin\" " with result "range_response_count:1 size:3325" took too long (568.996189ms) to execute
	
	* 
	* ==> kernel <==
	*  00:03:32 up 5 min,  0 users,  load average: 0.24, 0.28, 0.14
	Linux ingress-addon-legacy-060181 5.10.57 #1 SMP Tue Oct 31 22:14:31 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [561b5a547e3498f8a515c533e618797d7c8fcd8bee2d7a19d68b52c7ffc3cf4e] <==
	* Trace[859170436]: [551.356937ms] [547.08323ms] Object stored in database
	I1031 23:59:28.051244       1 trace.go:116] Trace[365474935]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:replicaset-controller,client:192.168.39.88 (started: 2023-10-31 23:59:27.465400472 +0000 UTC m=+22.805742189) (total time: 585.822479ms):
	Trace[365474935]: [585.78235ms] [585.560591ms] Object stored in database
	I1031 23:59:28.051525       1 trace.go:116] Trace[1710574014]: "GuaranteedUpdate etcd3" type:*apps.DaemonSet (started: 2023-10-31 23:59:27.476646963 +0000 UTC m=+22.816988643) (total time: 574.862836ms):
	Trace[1710574014]: [574.7494ms] [573.890972ms] Transaction committed
	I1031 23:59:28.052997       1 trace.go:116] Trace[1560532495]: "Update" url:/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy/status,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:daemon-set-controller,client:192.168.39.88 (started: 2023-10-31 23:59:27.472696069 +0000 UTC m=+22.813037729) (total time: 580.231808ms):
	Trace[1560532495]: [579.683414ms] [575.849324ms] Object stored in database
	I1031 23:59:28.053268       1 trace.go:116] Trace[1201416172]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:replicaset-controller,client:192.168.39.88 (started: 2023-10-31 23:59:27.475883876 +0000 UTC m=+22.816225558) (total time: 577.361332ms):
	Trace[1201416172]: [577.329902ms] [577.164545ms] Object stored in database
	I1031 23:59:28.053275       1 trace.go:116] Trace[83605651]: "GuaranteedUpdate etcd3" type:*core.Pod (started: 2023-10-31 23:59:27.504851205 +0000 UTC m=+22.845192886) (total time: 548.409998ms):
	Trace[83605651]: [548.358349ms] [547.044502ms] Transaction committed
	I1031 23:59:28.053402       1 trace.go:116] Trace[277099033]: "Update" url:/api/v1/namespaces/kube-system/pods/coredns-66bff467f8-zv7tl/status,user-agent:kube-scheduler/v1.18.20 (linux/amd64) kubernetes/1f3e19b/scheduler,client:192.168.39.88 (started: 2023-10-31 23:59:27.504650417 +0000 UTC m=+22.844992083) (total time: 548.735083ms):
	Trace[277099033]: [548.686211ms] [548.544598ms] Object stored in database
	I1031 23:59:28.055919       1 trace.go:116] Trace[1136099920]: "GuaranteedUpdate etcd3" type:*rbac.ClusterRole (started: 2023-10-31 23:59:27.476704767 +0000 UTC m=+22.817046469) (total time: 579.072782ms):
	Trace[1136099920]: [579.072782ms] [579.022675ms] END
	I1031 23:59:28.056003       1 trace.go:116] Trace[1819259325]: "Update" url:/apis/rbac.authorization.k8s.io/v1/clusterroles/admin,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:clusterrole-aggregation-controller,client:192.168.39.88 (started: 2023-10-31 23:59:27.475739644 +0000 UTC m=+22.816081310) (total time: 580.247657ms):
	Trace[1819259325]: [580.247657ms] [579.341845ms] END
	I1031 23:59:28.057584       1 trace.go:116] Trace[1410925788]: "GuaranteedUpdate etcd3" type:*apps.Deployment (started: 2023-10-31 23:59:27.50729789 +0000 UTC m=+22.847639572) (total time: 550.270988ms):
	Trace[1410925788]: [550.270988ms] [550.213583ms] END
	I1031 23:59:28.057636       1 trace.go:116] Trace[1880662555]: "Update" url:/apis/apps/v1/namespaces/kube-system/deployments/coredns/status,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:deployment-controller,client:192.168.39.88 (started: 2023-10-31 23:59:27.507119179 +0000 UTC m=+22.847460852) (total time: 550.504249ms):
	Trace[1880662555]: [550.504249ms] [550.375278ms] END
	I1031 23:59:28.057840       1 trace.go:116] Trace[1415150894]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-66bff467f8-zv7tl,user-agent:kube-scheduler/v1.18.20 (linux/amd64) kubernetes/1f3e19b/scheduler,client:192.168.39.88 (started: 2023-10-31 23:59:27.498877762 +0000 UTC m=+22.839219440) (total time: 558.948306ms):
	Trace[1415150894]: [558.903944ms] [558.89303ms] About to write a response
	I1101 00:00:10.315410       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1101 00:00:51.615776       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [e7f4ba6b038325886d3d32ad6703d5cbb61a377cdfec995bcc034eefd630e86b] <==
	* I1031 23:59:27.665038       1 disruption.go:339] Sending events to api server.
	I1031 23:59:27.714961       1 shared_informer.go:230] Caches are synced for persistent volume 
	I1031 23:59:27.760524       1 shared_informer.go:230] Caches are synced for expand 
	I1031 23:59:27.785404       1 shared_informer.go:230] Caches are synced for PV protection 
	I1031 23:59:27.809575       1 shared_informer.go:230] Caches are synced for attach detach 
	I1031 23:59:27.883111       1 shared_informer.go:230] Caches are synced for resource quota 
	I1031 23:59:27.906822       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1031 23:59:27.906904       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1031 23:59:27.909911       1 shared_informer.go:230] Caches are synced for endpoint 
	I1031 23:59:27.935205       1 shared_informer.go:230] Caches are synced for resource quota 
	I1031 23:59:27.959789       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I1031 23:59:27.961003       1 shared_informer.go:230] Caches are synced for garbage collector 
	E1031 23:59:28.056415       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I1031 23:59:28.058302       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"b3935aa3-7076-4b89-91b4-29bdb608caca", APIVersion:"apps/v1", ResourceVersion:"327", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-nlkg9
	I1031 23:59:28.316571       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"518dfe05-ac6b-4a10-ba7f-7d4b7f1f739c", APIVersion:"apps/v1", ResourceVersion:"361", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1031 23:59:28.344751       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"b3935aa3-7076-4b89-91b4-29bdb608caca", APIVersion:"apps/v1", ResourceVersion:"363", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-zv7tl
	I1101 00:00:10.263568       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"971df31f-c1a9-4d59-8552-8e57c0edf568", APIVersion:"apps/v1", ResourceVersion:"473", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1101 00:00:10.287630       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"2c8ad0ab-7ae6-4602-a3e5-80ddfa22a9ba", APIVersion:"apps/v1", ResourceVersion:"474", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-cvwzp
	I1101 00:00:10.345330       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"4426b24e-579e-4929-9128-c24cc97d11e8", APIVersion:"batch/v1", ResourceVersion:"482", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-tvwbt
	I1101 00:00:10.406638       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"9f440724-afbe-4b88-8c26-fd9b2eff308f", APIVersion:"batch/v1", ResourceVersion:"492", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-jn7tf
	I1101 00:00:21.794985       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"4426b24e-579e-4929-9128-c24cc97d11e8", APIVersion:"batch/v1", ResourceVersion:"491", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1101 00:00:34.069237       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"9f440724-afbe-4b88-8c26-fd9b2eff308f", APIVersion:"batch/v1", ResourceVersion:"500", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1101 00:03:15.523270       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"acb3b6bc-df93-44a0-a843-f91f79a53fbb", APIVersion:"apps/v1", ResourceVersion:"721", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1101 00:03:15.545602       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"24809539-b612-41a1-b056-d03efcfe3158", APIVersion:"apps/v1", ResourceVersion:"722", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-wgwnb
	E1101 00:03:28.958756       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-v6f8n" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [3d5256c54cf44e9d17fb2a2c3705117cd938ee6675e46c6e04035d0711e573d5] <==
	* W1031 23:59:29.763171       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1031 23:59:29.773717       1 node.go:136] Successfully retrieved node IP: 192.168.39.88
	I1031 23:59:29.773768       1 server_others.go:186] Using iptables Proxier.
	I1031 23:59:29.774485       1 server.go:583] Version: v1.18.20
	I1031 23:59:29.776416       1 config.go:133] Starting endpoints config controller
	I1031 23:59:29.776528       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1031 23:59:29.776554       1 config.go:315] Starting service config controller
	I1031 23:59:29.776558       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1031 23:59:29.877060       1 shared_informer.go:230] Caches are synced for service config 
	I1031 23:59:29.877140       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [5a8790c1933ac4cdef16ec3897307168184dfc01dbd23770f2d2d0dc2fe7dd23] <==
	* I1031 23:59:09.519615       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I1031 23:59:09.524296       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1031 23:59:09.524383       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1031 23:59:09.525540       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1031 23:59:09.525615       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1031 23:59:09.525674       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1031 23:59:09.525740       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1031 23:59:09.525797       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1031 23:59:09.528298       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1031 23:59:09.529371       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1031 23:59:09.529380       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1031 23:59:09.529593       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1031 23:59:09.529849       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1031 23:59:09.530062       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1031 23:59:09.532913       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1031 23:59:10.359887       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1031 23:59:10.412191       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1031 23:59:10.495608       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1031 23:59:10.529057       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1031 23:59:10.591279       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1031 23:59:10.707342       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1031 23:59:10.724580       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1031 23:59:10.729661       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1031 23:59:10.771261       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1031 23:59:13.525911       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-31 23:58:35 UTC, ends at Wed 2023-11-01 00:03:32 UTC. --
	Nov 01 00:00:51 ingress-addon-legacy-060181 kubelet[1425]: I1101 00:00:51.936295    1425 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-tf5xv" (UniqueName: "kubernetes.io/secret/04f9bdf4-9b60-4627-ad0b-34a6e7e61432-default-token-tf5xv") pod "nginx" (UID: "04f9bdf4-9b60-4627-ad0b-34a6e7e61432")
	Nov 01 00:01:31 ingress-addon-legacy-060181 kubelet[1425]: E1101 00:01:31.103050    1425 kubelet.go:1703] Unable to attach or mount volumes for pod "coredns-66bff467f8-zv7tl_kube-system(99055fa3-4688-4dc2-8d33-bc11c80b03ce)": unmounted volumes=[config-volume coredns-token-q455z], unattached volumes=[config-volume coredns-token-q455z]: timed out waiting for the condition; skipping pod
	Nov 01 00:01:31 ingress-addon-legacy-060181 kubelet[1425]: E1101 00:01:31.103101    1425 pod_workers.go:191] Error syncing pod 99055fa3-4688-4dc2-8d33-bc11c80b03ce ("coredns-66bff467f8-zv7tl_kube-system(99055fa3-4688-4dc2-8d33-bc11c80b03ce)"), skipping: unmounted volumes=[config-volume coredns-token-q455z], unattached volumes=[config-volume coredns-token-q455z]: timed out waiting for the condition
	Nov 01 00:03:15 ingress-addon-legacy-060181 kubelet[1425]: I1101 00:03:15.564279    1425 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Nov 01 00:03:15 ingress-addon-legacy-060181 kubelet[1425]: I1101 00:03:15.617386    1425 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-tf5xv" (UniqueName: "kubernetes.io/secret/7b855959-222a-40dd-9094-d85fd243d217-default-token-tf5xv") pod "hello-world-app-5f5d8b66bb-wgwnb" (UID: "7b855959-222a-40dd-9094-d85fd243d217")
	Nov 01 00:03:16 ingress-addon-legacy-060181 kubelet[1425]: E1101 00:03:16.520695    1425 secret.go:195] Couldn't get secret kube-system/minikube-ingress-dns-token-sblxj: secret "minikube-ingress-dns-token-sblxj" not found
	Nov 01 00:03:16 ingress-addon-legacy-060181 kubelet[1425]: E1101 00:03:16.520814    1425 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/e52bda3d-abf4-416e-af23-e7290070a0b5-minikube-ingress-dns-token-sblxj podName:e52bda3d-abf4-416e-af23-e7290070a0b5 nodeName:}" failed. No retries permitted until 2023-11-01 00:03:17.020779027 +0000 UTC m=+244.137445483 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"minikube-ingress-dns-token-sblxj\" (UniqueName: \"kubernetes.io/secret/e52bda3d-abf4-416e-af23-e7290070a0b5-minikube-ingress-dns-token-sblxj\") pod \"kube-ingress-dns-minikube\" (UID: \"e52bda3d-abf4-416e-af23-e7290070a0b5\") : secret \"minikube-ingress-dns-token-sblxj\" not found"
	Nov 01 00:03:17 ingress-addon-legacy-060181 kubelet[1425]: E1101 00:03:17.022783    1425 secret.go:195] Couldn't get secret kube-system/minikube-ingress-dns-token-sblxj: secret "minikube-ingress-dns-token-sblxj" not found
	Nov 01 00:03:17 ingress-addon-legacy-060181 kubelet[1425]: E1101 00:03:17.022888    1425 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/e52bda3d-abf4-416e-af23-e7290070a0b5-minikube-ingress-dns-token-sblxj podName:e52bda3d-abf4-416e-af23-e7290070a0b5 nodeName:}" failed. No retries permitted until 2023-11-01 00:03:18.022866241 +0000 UTC m=+245.139532702 (durationBeforeRetry 1s). Error: "MountVolume.SetUp failed for volume \"minikube-ingress-dns-token-sblxj\" (UniqueName: \"kubernetes.io/secret/e52bda3d-abf4-416e-af23-e7290070a0b5-minikube-ingress-dns-token-sblxj\") pod \"kube-ingress-dns-minikube\" (UID: \"e52bda3d-abf4-416e-af23-e7290070a0b5\") : secret \"minikube-ingress-dns-token-sblxj\" not found"
	Nov 01 00:03:17 ingress-addon-legacy-060181 kubelet[1425]: I1101 00:03:17.732943    1425 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 88c01f2376d7ccb2ca3edd3c0625f7cd2ee1feaa7ac80b753549c8bf7f8928dc
	Nov 01 00:03:17 ingress-addon-legacy-060181 kubelet[1425]: I1101 00:03:17.825166    1425 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-sblxj" (UniqueName: "kubernetes.io/secret/e52bda3d-abf4-416e-af23-e7290070a0b5-minikube-ingress-dns-token-sblxj") pod "e52bda3d-abf4-416e-af23-e7290070a0b5" (UID: "e52bda3d-abf4-416e-af23-e7290070a0b5")
	Nov 01 00:03:17 ingress-addon-legacy-060181 kubelet[1425]: I1101 00:03:17.827595    1425 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e52bda3d-abf4-416e-af23-e7290070a0b5-minikube-ingress-dns-token-sblxj" (OuterVolumeSpecName: "minikube-ingress-dns-token-sblxj") pod "e52bda3d-abf4-416e-af23-e7290070a0b5" (UID: "e52bda3d-abf4-416e-af23-e7290070a0b5"). InnerVolumeSpecName "minikube-ingress-dns-token-sblxj". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 01 00:03:17 ingress-addon-legacy-060181 kubelet[1425]: I1101 00:03:17.926125    1425 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-sblxj" (UniqueName: "kubernetes.io/secret/e52bda3d-abf4-416e-af23-e7290070a0b5-minikube-ingress-dns-token-sblxj") on node "ingress-addon-legacy-060181" DevicePath ""
	Nov 01 00:03:18 ingress-addon-legacy-060181 kubelet[1425]: I1101 00:03:18.145329    1425 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 88c01f2376d7ccb2ca3edd3c0625f7cd2ee1feaa7ac80b753549c8bf7f8928dc
	Nov 01 00:03:18 ingress-addon-legacy-060181 kubelet[1425]: E1101 00:03:18.146113    1425 remote_runtime.go:295] ContainerStatus "88c01f2376d7ccb2ca3edd3c0625f7cd2ee1feaa7ac80b753549c8bf7f8928dc" from runtime service failed: rpc error: code = NotFound desc = could not find container "88c01f2376d7ccb2ca3edd3c0625f7cd2ee1feaa7ac80b753549c8bf7f8928dc": container with ID starting with 88c01f2376d7ccb2ca3edd3c0625f7cd2ee1feaa7ac80b753549c8bf7f8928dc not found: ID does not exist
	Nov 01 00:03:24 ingress-addon-legacy-060181 kubelet[1425]: E1101 00:03:24.282861    1425 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-cvwzp.179356dff17b374b", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-cvwzp", UID:"ea65cf5b-06f8-46dd-a61a-7e0784a1963e", APIVersion:"v1", ResourceVersion:"478", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-060181"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc148835310903f4b, ext:251394555292, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc148835310903f4b, ext:251394555292, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-cvwzp.179356dff17b374b" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Nov 01 00:03:24 ingress-addon-legacy-060181 kubelet[1425]: E1101 00:03:24.300924    1425 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-cvwzp.179356dff17b374b", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-cvwzp", UID:"ea65cf5b-06f8-46dd-a61a-7e0784a1963e", APIVersion:"v1", ResourceVersion:"478", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-060181"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc148835310903f4b, ext:251394555292, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc148835311249d2e, ext:251404278658, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-cvwzp.179356dff17b374b" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Nov 01 00:03:26 ingress-addon-legacy-060181 kubelet[1425]: W1101 00:03:26.769788    1425 pod_container_deletor.go:77] Container "15873d693cbbe2bafe959360e0db8a11cb2b9adb3c25c78306a5eb456d5cb332" not found in pod's containers
	Nov 01 00:03:28 ingress-addon-legacy-060181 kubelet[1425]: I1101 00:03:28.364088    1425 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/ea65cf5b-06f8-46dd-a61a-7e0784a1963e-webhook-cert") pod "ea65cf5b-06f8-46dd-a61a-7e0784a1963e" (UID: "ea65cf5b-06f8-46dd-a61a-7e0784a1963e")
	Nov 01 00:03:28 ingress-addon-legacy-060181 kubelet[1425]: I1101 00:03:28.364168    1425 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-wjff8" (UniqueName: "kubernetes.io/secret/ea65cf5b-06f8-46dd-a61a-7e0784a1963e-ingress-nginx-token-wjff8") pod "ea65cf5b-06f8-46dd-a61a-7e0784a1963e" (UID: "ea65cf5b-06f8-46dd-a61a-7e0784a1963e")
	Nov 01 00:03:28 ingress-addon-legacy-060181 kubelet[1425]: I1101 00:03:28.368523    1425 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea65cf5b-06f8-46dd-a61a-7e0784a1963e-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "ea65cf5b-06f8-46dd-a61a-7e0784a1963e" (UID: "ea65cf5b-06f8-46dd-a61a-7e0784a1963e"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 01 00:03:28 ingress-addon-legacy-060181 kubelet[1425]: I1101 00:03:28.369278    1425 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea65cf5b-06f8-46dd-a61a-7e0784a1963e-ingress-nginx-token-wjff8" (OuterVolumeSpecName: "ingress-nginx-token-wjff8") pod "ea65cf5b-06f8-46dd-a61a-7e0784a1963e" (UID: "ea65cf5b-06f8-46dd-a61a-7e0784a1963e"). InnerVolumeSpecName "ingress-nginx-token-wjff8". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 01 00:03:28 ingress-addon-legacy-060181 kubelet[1425]: I1101 00:03:28.464540    1425 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/ea65cf5b-06f8-46dd-a61a-7e0784a1963e-webhook-cert") on node "ingress-addon-legacy-060181" DevicePath ""
	Nov 01 00:03:28 ingress-addon-legacy-060181 kubelet[1425]: I1101 00:03:28.464608    1425 reconciler.go:319] Volume detached for volume "ingress-nginx-token-wjff8" (UniqueName: "kubernetes.io/secret/ea65cf5b-06f8-46dd-a61a-7e0784a1963e-ingress-nginx-token-wjff8") on node "ingress-addon-legacy-060181" DevicePath ""
	Nov 01 00:03:29 ingress-addon-legacy-060181 kubelet[1425]: W1101 00:03:29.420061    1425 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/ea65cf5b-06f8-46dd-a61a-7e0784a1963e/volumes" does not exist
	
	* 
	* ==> storage-provisioner [0caf7ff33b7dd32c6e5d4994f1b5bf65d9aa26bd3edb43838317ee161c4c1fcc] <==
	* I1031 23:59:30.162963       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1031 23:59:30.172596       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1031 23:59:30.172648       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1031 23:59:30.185555       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1031 23:59:30.186086       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-060181_5dcd1c9f-2b45-4272-8ba8-93f184fce7c7!
	I1031 23:59:30.188069       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d05bc027-24ee-4333-b264-e04858ce10cb", APIVersion:"v1", ResourceVersion:"399", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-060181_5dcd1c9f-2b45-4272-8ba8-93f184fce7c7 became leader
	I1031 23:59:30.286669       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-060181_5dcd1c9f-2b45-4272-8ba8-93f184fce7c7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-060181 -n ingress-addon-legacy-060181
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-060181 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (177.71s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (3.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-600483 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-600483 -- exec busybox-5bc68d56bd-6jjms -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-600483 -- exec busybox-5bc68d56bd-6jjms -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-600483 -- exec busybox-5bc68d56bd-6jjms -- sh -c "ping -c 1 192.168.39.1": exit status 1 (192.221592ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-6jjms): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-600483 -- exec busybox-5bc68d56bd-8pjvd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-600483 -- exec busybox-5bc68d56bd-8pjvd -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-600483 -- exec busybox-5bc68d56bd-8pjvd -- sh -c "ping -c 1 192.168.39.1": exit status 1 (252.002896ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-8pjvd): exit status 1
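The "ping: permission denied (are you root?)" message above is busybox ping reporting that it could not open an ICMP socket; in an unprivileged pod this normally requires root, the CAP_NET_RAW capability, or a kernel that allows unprivileged ICMP echo via the net.ipv4.ping_group_range sysctl. A minimal diagnostic sketch, reusing the profile and pod names from the run above (the checks are illustrative and not part of the test itself), could be:
	out/minikube-linux-amd64 kubectl -p multinode-600483 -- exec busybox-5bc68d56bd-6jjms -- id
	out/minikube-linux-amd64 kubectl -p multinode-600483 -- exec busybox-5bc68d56bd-6jjms -- cat /proc/sys/net/ipv4/ping_group_range
	out/minikube-linux-amd64 kubectl -p multinode-600483 -- exec busybox-5bc68d56bd-6jjms -- sh -c "grep Cap /proc/self/status"
The first command shows the effective UID in the container, the second whether the host kernel permits unprivileged ICMP for that group range, and the third the capability sets granted to the process, which together indicate which of the three conditions is missing.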
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-600483 -n multinode-600483
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-600483 logs -n 25: (1.362420985s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| ssh     | mount-start-2-711158 ssh -- ls                    | mount-start-2-711158 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:07 UTC | 01 Nov 23 00:07 UTC |
	|         | /minikube-host                                    |                      |         |                |                     |                     |
	| ssh     | mount-start-2-711158 ssh --                       | mount-start-2-711158 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:07 UTC | 01 Nov 23 00:07 UTC |
	|         | mount | grep 9p                                   |                      |         |                |                     |                     |
	| stop    | -p mount-start-2-711158                           | mount-start-2-711158 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:07 UTC | 01 Nov 23 00:07 UTC |
	| start   | -p mount-start-2-711158                           | mount-start-2-711158 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:07 UTC | 01 Nov 23 00:07 UTC |
	| mount   | /home/jenkins:/minikube-host                      | mount-start-2-711158 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:07 UTC |                     |
	|         | --profile mount-start-2-711158                    |                      |         |                |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                       |                      |         |                |                     |                     |
	|         | --gid 0 --ip  --msize 6543                        |                      |         |                |                     |                     |
	|         | --port 46465 --type 9p --uid 0                    |                      |         |                |                     |                     |
	| ssh     | mount-start-2-711158 ssh -- ls                    | mount-start-2-711158 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:07 UTC | 01 Nov 23 00:07 UTC |
	|         | /minikube-host                                    |                      |         |                |                     |                     |
	| ssh     | mount-start-2-711158 ssh --                       | mount-start-2-711158 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:07 UTC | 01 Nov 23 00:07 UTC |
	|         | mount | grep 9p                                   |                      |         |                |                     |                     |
	| delete  | -p mount-start-2-711158                           | mount-start-2-711158 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:07 UTC | 01 Nov 23 00:07 UTC |
	| delete  | -p mount-start-1-690840                           | mount-start-1-690840 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:07 UTC | 01 Nov 23 00:07 UTC |
	| start   | -p multinode-600483                               | multinode-600483     | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:07 UTC | 01 Nov 23 00:09 UTC |
	|         | --wait=true --memory=2200                         |                      |         |                |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |                |                     |                     |
	|         | --alsologtostderr                                 |                      |         |                |                     |                     |
	|         | --driver=kvm2                                     |                      |         |                |                     |                     |
	|         | --container-runtime=crio                          |                      |         |                |                     |                     |
	| kubectl | -p multinode-600483 -- apply -f                   | multinode-600483     | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:09 UTC | 01 Nov 23 00:09 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |                |                     |                     |
	| kubectl | -p multinode-600483 -- rollout                    | multinode-600483     | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:09 UTC | 01 Nov 23 00:09 UTC |
	|         | status deployment/busybox                         |                      |         |                |                     |                     |
	| kubectl | -p multinode-600483 -- get pods -o                | multinode-600483     | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:09 UTC | 01 Nov 23 00:09 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |                |                     |                     |
	| kubectl | -p multinode-600483 -- get pods -o                | multinode-600483     | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:09 UTC | 01 Nov 23 00:09 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |                |                     |                     |
	| kubectl | -p multinode-600483 -- exec                       | multinode-600483     | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:09 UTC | 01 Nov 23 00:09 UTC |
	|         | busybox-5bc68d56bd-6jjms --                       |                      |         |                |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |                |                     |                     |
	| kubectl | -p multinode-600483 -- exec                       | multinode-600483     | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:09 UTC | 01 Nov 23 00:09 UTC |
	|         | busybox-5bc68d56bd-8pjvd --                       |                      |         |                |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |                |                     |                     |
	| kubectl | -p multinode-600483 -- exec                       | multinode-600483     | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:09 UTC | 01 Nov 23 00:09 UTC |
	|         | busybox-5bc68d56bd-6jjms --                       |                      |         |                |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |                |                     |                     |
	| kubectl | -p multinode-600483 -- exec                       | multinode-600483     | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:09 UTC | 01 Nov 23 00:09 UTC |
	|         | busybox-5bc68d56bd-8pjvd --                       |                      |         |                |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |                |                     |                     |
	| kubectl | -p multinode-600483 -- exec                       | multinode-600483     | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:09 UTC | 01 Nov 23 00:09 UTC |
	|         | busybox-5bc68d56bd-6jjms -- nslookup              |                      |         |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |                |                     |                     |
	| kubectl | -p multinode-600483 -- exec                       | multinode-600483     | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:09 UTC | 01 Nov 23 00:09 UTC |
	|         | busybox-5bc68d56bd-8pjvd -- nslookup              |                      |         |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |                |                     |                     |
	| kubectl | -p multinode-600483 -- get pods -o                | multinode-600483     | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:09 UTC | 01 Nov 23 00:09 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |                |                     |                     |
	| kubectl | -p multinode-600483 -- exec                       | multinode-600483     | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:09 UTC | 01 Nov 23 00:09 UTC |
	|         | busybox-5bc68d56bd-6jjms                          |                      |         |                |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |                |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |                |                     |                     |
	| kubectl | -p multinode-600483 -- exec                       | multinode-600483     | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:09 UTC |                     |
	|         | busybox-5bc68d56bd-6jjms -- sh                    |                      |         |                |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |                |                     |                     |
	| kubectl | -p multinode-600483 -- exec                       | multinode-600483     | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:09 UTC | 01 Nov 23 00:09 UTC |
	|         | busybox-5bc68d56bd-8pjvd                          |                      |         |                |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |                |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |                |                     |                     |
	| kubectl | -p multinode-600483 -- exec                       | multinode-600483     | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:09 UTC |                     |
	|         | busybox-5bc68d56bd-8pjvd -- sh                    |                      |         |                |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |                |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/01 00:07:44
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 00:07:44.991316   26955 out.go:296] Setting OutFile to fd 1 ...
	I1101 00:07:44.991586   26955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:07:44.991596   26955 out.go:309] Setting ErrFile to fd 2...
	I1101 00:07:44.991601   26955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:07:44.991800   26955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7305/.minikube/bin
	I1101 00:07:44.992408   26955 out.go:303] Setting JSON to false
	I1101 00:07:44.993346   26955 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3010,"bootTime":1698794255,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 00:07:44.993407   26955 start.go:138] virtualization: kvm guest
	I1101 00:07:44.995916   26955 out.go:177] * [multinode-600483] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1101 00:07:44.997630   26955 notify.go:220] Checking for updates...
	I1101 00:07:44.997661   26955 out.go:177]   - MINIKUBE_LOCATION=17486
	I1101 00:07:44.999814   26955 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 00:07:45.001426   26955 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 00:07:45.003178   26955 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7305/.minikube
	I1101 00:07:45.004755   26955 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 00:07:45.006203   26955 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 00:07:45.007741   26955 driver.go:378] Setting default libvirt URI to qemu:///system
	I1101 00:07:45.043422   26955 out.go:177] * Using the kvm2 driver based on user configuration
	I1101 00:07:45.044836   26955 start.go:298] selected driver: kvm2
	I1101 00:07:45.044851   26955 start.go:902] validating driver "kvm2" against <nil>
	I1101 00:07:45.044862   26955 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 00:07:45.045534   26955 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:07:45.045618   26955 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17486-7305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1101 00:07:45.060500   26955 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1101 00:07:45.060554   26955 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1101 00:07:45.060756   26955 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 00:07:45.060786   26955 cni.go:84] Creating CNI manager for ""
	I1101 00:07:45.060791   26955 cni.go:136] 0 nodes found, recommending kindnet
	I1101 00:07:45.060800   26955 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 00:07:45.060814   26955 start_flags.go:323] config:
	{Name:multinode-600483 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-600483 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:07:45.060927   26955 iso.go:125] acquiring lock: {Name:mk1f649ca0b7c1ae293cd66cb85f9eeda028b20b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:07:45.062794   26955 out.go:177] * Starting control plane node multinode-600483 in cluster multinode-600483
	I1101 00:07:45.064257   26955 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 00:07:45.064307   26955 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1101 00:07:45.064315   26955 cache.go:56] Caching tarball of preloaded images
	I1101 00:07:45.064422   26955 preload.go:174] Found /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 00:07:45.064436   26955 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1101 00:07:45.064755   26955 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/config.json ...
	I1101 00:07:45.064783   26955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/config.json: {Name:mke9f5c3a89fecf871804fa60090a8f3f77f975f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:07:45.064919   26955 start.go:365] acquiring machines lock for multinode-600483: {Name:mk7aad88408c319111b9be8e59d9593a9e88374b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 00:07:45.064951   26955 start.go:369] acquired machines lock for "multinode-600483" in 19.11µs
	I1101 00:07:45.064974   26955 start.go:93] Provisioning new machine with config: &{Name:multinode-600483 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.3 ClusterName:multinode-600483 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 00:07:45.065045   26955 start.go:125] createHost starting for "" (driver="kvm2")
	I1101 00:07:45.066863   26955 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1101 00:07:45.067014   26955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1101 00:07:45.067059   26955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:07:45.081304   26955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41683
	I1101 00:07:45.081807   26955 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:07:45.082319   26955 main.go:141] libmachine: Using API Version  1
	I1101 00:07:45.082339   26955 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:07:45.082641   26955 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:07:45.082810   26955 main.go:141] libmachine: (multinode-600483) Calling .GetMachineName
	I1101 00:07:45.082996   26955 main.go:141] libmachine: (multinode-600483) Calling .DriverName
	I1101 00:07:45.083162   26955 start.go:159] libmachine.API.Create for "multinode-600483" (driver="kvm2")
	I1101 00:07:45.083202   26955 client.go:168] LocalClient.Create starting
	I1101 00:07:45.083233   26955 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem
	I1101 00:07:45.083270   26955 main.go:141] libmachine: Decoding PEM data...
	I1101 00:07:45.083287   26955 main.go:141] libmachine: Parsing certificate...
	I1101 00:07:45.083368   26955 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem
	I1101 00:07:45.083401   26955 main.go:141] libmachine: Decoding PEM data...
	I1101 00:07:45.083424   26955 main.go:141] libmachine: Parsing certificate...
	I1101 00:07:45.083450   26955 main.go:141] libmachine: Running pre-create checks...
	I1101 00:07:45.083477   26955 main.go:141] libmachine: (multinode-600483) Calling .PreCreateCheck
	I1101 00:07:45.083982   26955 main.go:141] libmachine: (multinode-600483) Calling .GetConfigRaw
	I1101 00:07:45.084462   26955 main.go:141] libmachine: Creating machine...
	I1101 00:07:45.084481   26955 main.go:141] libmachine: (multinode-600483) Calling .Create
	I1101 00:07:45.084627   26955 main.go:141] libmachine: (multinode-600483) Creating KVM machine...
	I1101 00:07:45.085963   26955 main.go:141] libmachine: (multinode-600483) DBG | found existing default KVM network
	I1101 00:07:45.086780   26955 main.go:141] libmachine: (multinode-600483) DBG | I1101 00:07:45.086633   26978 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000204620}
	I1101 00:07:45.092282   26955 main.go:141] libmachine: (multinode-600483) DBG | trying to create private KVM network mk-multinode-600483 192.168.39.0/24...
	I1101 00:07:45.165042   26955 main.go:141] libmachine: (multinode-600483) DBG | private KVM network mk-multinode-600483 192.168.39.0/24 created
	I1101 00:07:45.165087   26955 main.go:141] libmachine: (multinode-600483) Setting up store path in /home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483 ...
	I1101 00:07:45.165108   26955 main.go:141] libmachine: (multinode-600483) DBG | I1101 00:07:45.164992   26978 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17486-7305/.minikube
	I1101 00:07:45.165158   26955 main.go:141] libmachine: (multinode-600483) Building disk image from file:///home/jenkins/minikube-integration/17486-7305/.minikube/cache/iso/amd64/minikube-v1.32.0-1698773592-17486-amd64.iso
	I1101 00:07:45.165198   26955 main.go:141] libmachine: (multinode-600483) Downloading /home/jenkins/minikube-integration/17486-7305/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17486-7305/.minikube/cache/iso/amd64/minikube-v1.32.0-1698773592-17486-amd64.iso...
	I1101 00:07:45.373298   26955 main.go:141] libmachine: (multinode-600483) DBG | I1101 00:07:45.373157   26978 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483/id_rsa...
	I1101 00:07:45.413588   26955 main.go:141] libmachine: (multinode-600483) DBG | I1101 00:07:45.413456   26978 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483/multinode-600483.rawdisk...
	I1101 00:07:45.413627   26955 main.go:141] libmachine: (multinode-600483) DBG | Writing magic tar header
	I1101 00:07:45.413647   26955 main.go:141] libmachine: (multinode-600483) DBG | Writing SSH key tar header
	I1101 00:07:45.413656   26955 main.go:141] libmachine: (multinode-600483) DBG | I1101 00:07:45.413560   26978 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483 ...
	I1101 00:07:45.413669   26955 main.go:141] libmachine: (multinode-600483) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483
	I1101 00:07:45.413711   26955 main.go:141] libmachine: (multinode-600483) Setting executable bit set on /home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483 (perms=drwx------)
	I1101 00:07:45.413752   26955 main.go:141] libmachine: (multinode-600483) Setting executable bit set on /home/jenkins/minikube-integration/17486-7305/.minikube/machines (perms=drwxr-xr-x)
	I1101 00:07:45.413774   26955 main.go:141] libmachine: (multinode-600483) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17486-7305/.minikube/machines
	I1101 00:07:45.413787   26955 main.go:141] libmachine: (multinode-600483) Setting executable bit set on /home/jenkins/minikube-integration/17486-7305/.minikube (perms=drwxr-xr-x)
	I1101 00:07:45.413801   26955 main.go:141] libmachine: (multinode-600483) Setting executable bit set on /home/jenkins/minikube-integration/17486-7305 (perms=drwxrwxr-x)
	I1101 00:07:45.413811   26955 main.go:141] libmachine: (multinode-600483) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1101 00:07:45.413823   26955 main.go:141] libmachine: (multinode-600483) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1101 00:07:45.413836   26955 main.go:141] libmachine: (multinode-600483) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17486-7305/.minikube
	I1101 00:07:45.413846   26955 main.go:141] libmachine: (multinode-600483) Creating domain...
	I1101 00:07:45.413878   26955 main.go:141] libmachine: (multinode-600483) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17486-7305
	I1101 00:07:45.413907   26955 main.go:141] libmachine: (multinode-600483) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1101 00:07:45.413926   26955 main.go:141] libmachine: (multinode-600483) DBG | Checking permissions on dir: /home/jenkins
	I1101 00:07:45.413940   26955 main.go:141] libmachine: (multinode-600483) DBG | Checking permissions on dir: /home
	I1101 00:07:45.413967   26955 main.go:141] libmachine: (multinode-600483) DBG | Skipping /home - not owner
	I1101 00:07:45.414928   26955 main.go:141] libmachine: (multinode-600483) define libvirt domain using xml: 
	I1101 00:07:45.414951   26955 main.go:141] libmachine: (multinode-600483) <domain type='kvm'>
	I1101 00:07:45.414964   26955 main.go:141] libmachine: (multinode-600483)   <name>multinode-600483</name>
	I1101 00:07:45.414978   26955 main.go:141] libmachine: (multinode-600483)   <memory unit='MiB'>2200</memory>
	I1101 00:07:45.414990   26955 main.go:141] libmachine: (multinode-600483)   <vcpu>2</vcpu>
	I1101 00:07:45.415002   26955 main.go:141] libmachine: (multinode-600483)   <features>
	I1101 00:07:45.415067   26955 main.go:141] libmachine: (multinode-600483)     <acpi/>
	I1101 00:07:45.415115   26955 main.go:141] libmachine: (multinode-600483)     <apic/>
	I1101 00:07:45.415132   26955 main.go:141] libmachine: (multinode-600483)     <pae/>
	I1101 00:07:45.415148   26955 main.go:141] libmachine: (multinode-600483)     
	I1101 00:07:45.415163   26955 main.go:141] libmachine: (multinode-600483)   </features>
	I1101 00:07:45.415176   26955 main.go:141] libmachine: (multinode-600483)   <cpu mode='host-passthrough'>
	I1101 00:07:45.415204   26955 main.go:141] libmachine: (multinode-600483)   
	I1101 00:07:45.415228   26955 main.go:141] libmachine: (multinode-600483)   </cpu>
	I1101 00:07:45.415241   26955 main.go:141] libmachine: (multinode-600483)   <os>
	I1101 00:07:45.415255   26955 main.go:141] libmachine: (multinode-600483)     <type>hvm</type>
	I1101 00:07:45.415269   26955 main.go:141] libmachine: (multinode-600483)     <boot dev='cdrom'/>
	I1101 00:07:45.415282   26955 main.go:141] libmachine: (multinode-600483)     <boot dev='hd'/>
	I1101 00:07:45.415296   26955 main.go:141] libmachine: (multinode-600483)     <bootmenu enable='no'/>
	I1101 00:07:45.415312   26955 main.go:141] libmachine: (multinode-600483)   </os>
	I1101 00:07:45.415325   26955 main.go:141] libmachine: (multinode-600483)   <devices>
	I1101 00:07:45.415335   26955 main.go:141] libmachine: (multinode-600483)     <disk type='file' device='cdrom'>
	I1101 00:07:45.415346   26955 main.go:141] libmachine: (multinode-600483)       <source file='/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483/boot2docker.iso'/>
	I1101 00:07:45.415361   26955 main.go:141] libmachine: (multinode-600483)       <target dev='hdc' bus='scsi'/>
	I1101 00:07:45.415376   26955 main.go:141] libmachine: (multinode-600483)       <readonly/>
	I1101 00:07:45.415388   26955 main.go:141] libmachine: (multinode-600483)     </disk>
	I1101 00:07:45.415402   26955 main.go:141] libmachine: (multinode-600483)     <disk type='file' device='disk'>
	I1101 00:07:45.415416   26955 main.go:141] libmachine: (multinode-600483)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1101 00:07:45.415428   26955 main.go:141] libmachine: (multinode-600483)       <source file='/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483/multinode-600483.rawdisk'/>
	I1101 00:07:45.415441   26955 main.go:141] libmachine: (multinode-600483)       <target dev='hda' bus='virtio'/>
	I1101 00:07:45.415456   26955 main.go:141] libmachine: (multinode-600483)     </disk>
	I1101 00:07:45.415476   26955 main.go:141] libmachine: (multinode-600483)     <interface type='network'>
	I1101 00:07:45.415491   26955 main.go:141] libmachine: (multinode-600483)       <source network='mk-multinode-600483'/>
	I1101 00:07:45.415503   26955 main.go:141] libmachine: (multinode-600483)       <model type='virtio'/>
	I1101 00:07:45.415512   26955 main.go:141] libmachine: (multinode-600483)     </interface>
	I1101 00:07:45.415521   26955 main.go:141] libmachine: (multinode-600483)     <interface type='network'>
	I1101 00:07:45.415536   26955 main.go:141] libmachine: (multinode-600483)       <source network='default'/>
	I1101 00:07:45.415552   26955 main.go:141] libmachine: (multinode-600483)       <model type='virtio'/>
	I1101 00:07:45.415566   26955 main.go:141] libmachine: (multinode-600483)     </interface>
	I1101 00:07:45.415578   26955 main.go:141] libmachine: (multinode-600483)     <serial type='pty'>
	I1101 00:07:45.415591   26955 main.go:141] libmachine: (multinode-600483)       <target port='0'/>
	I1101 00:07:45.415599   26955 main.go:141] libmachine: (multinode-600483)     </serial>
	I1101 00:07:45.415608   26955 main.go:141] libmachine: (multinode-600483)     <console type='pty'>
	I1101 00:07:45.415621   26955 main.go:141] libmachine: (multinode-600483)       <target type='serial' port='0'/>
	I1101 00:07:45.415634   26955 main.go:141] libmachine: (multinode-600483)     </console>
	I1101 00:07:45.415660   26955 main.go:141] libmachine: (multinode-600483)     <rng model='virtio'>
	I1101 00:07:45.415677   26955 main.go:141] libmachine: (multinode-600483)       <backend model='random'>/dev/random</backend>
	I1101 00:07:45.415686   26955 main.go:141] libmachine: (multinode-600483)     </rng>
	I1101 00:07:45.415693   26955 main.go:141] libmachine: (multinode-600483)     
	I1101 00:07:45.415713   26955 main.go:141] libmachine: (multinode-600483)     
	I1101 00:07:45.415725   26955 main.go:141] libmachine: (multinode-600483)   </devices>
	I1101 00:07:45.415737   26955 main.go:141] libmachine: (multinode-600483) </domain>
	I1101 00:07:45.415748   26955 main.go:141] libmachine: (multinode-600483) 
	I1101 00:07:45.419696   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:b9:32:cb in network default
	I1101 00:07:45.420238   26955 main.go:141] libmachine: (multinode-600483) Ensuring networks are active...
	I1101 00:07:45.420265   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:07:45.420944   26955 main.go:141] libmachine: (multinode-600483) Ensuring network default is active
	I1101 00:07:45.421215   26955 main.go:141] libmachine: (multinode-600483) Ensuring network mk-multinode-600483 is active
	I1101 00:07:45.421739   26955 main.go:141] libmachine: (multinode-600483) Getting domain xml...
	I1101 00:07:45.422458   26955 main.go:141] libmachine: (multinode-600483) Creating domain...
	I1101 00:07:46.637375   26955 main.go:141] libmachine: (multinode-600483) Waiting to get IP...
	I1101 00:07:46.638253   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:07:46.638662   26955 main.go:141] libmachine: (multinode-600483) DBG | unable to find current IP address of domain multinode-600483 in network mk-multinode-600483
	I1101 00:07:46.638688   26955 main.go:141] libmachine: (multinode-600483) DBG | I1101 00:07:46.638647   26978 retry.go:31] will retry after 233.579167ms: waiting for machine to come up
	I1101 00:07:46.874308   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:07:46.874725   26955 main.go:141] libmachine: (multinode-600483) DBG | unable to find current IP address of domain multinode-600483 in network mk-multinode-600483
	I1101 00:07:46.874769   26955 main.go:141] libmachine: (multinode-600483) DBG | I1101 00:07:46.874688   26978 retry.go:31] will retry after 337.664943ms: waiting for machine to come up
	I1101 00:07:47.214190   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:07:47.214643   26955 main.go:141] libmachine: (multinode-600483) DBG | unable to find current IP address of domain multinode-600483 in network mk-multinode-600483
	I1101 00:07:47.214948   26955 main.go:141] libmachine: (multinode-600483) DBG | I1101 00:07:47.214580   26978 retry.go:31] will retry after 461.884363ms: waiting for machine to come up
	I1101 00:07:47.678109   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:07:47.678584   26955 main.go:141] libmachine: (multinode-600483) DBG | unable to find current IP address of domain multinode-600483 in network mk-multinode-600483
	I1101 00:07:47.678607   26955 main.go:141] libmachine: (multinode-600483) DBG | I1101 00:07:47.678538   26978 retry.go:31] will retry after 602.39549ms: waiting for machine to come up
	I1101 00:07:48.282366   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:07:48.282826   26955 main.go:141] libmachine: (multinode-600483) DBG | unable to find current IP address of domain multinode-600483 in network mk-multinode-600483
	I1101 00:07:48.282872   26955 main.go:141] libmachine: (multinode-600483) DBG | I1101 00:07:48.282751   26978 retry.go:31] will retry after 695.891182ms: waiting for machine to come up
	I1101 00:07:48.980704   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:07:48.981073   26955 main.go:141] libmachine: (multinode-600483) DBG | unable to find current IP address of domain multinode-600483 in network mk-multinode-600483
	I1101 00:07:48.981104   26955 main.go:141] libmachine: (multinode-600483) DBG | I1101 00:07:48.981019   26978 retry.go:31] will retry after 929.041484ms: waiting for machine to come up
	I1101 00:07:49.912223   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:07:49.912619   26955 main.go:141] libmachine: (multinode-600483) DBG | unable to find current IP address of domain multinode-600483 in network mk-multinode-600483
	I1101 00:07:49.912644   26955 main.go:141] libmachine: (multinode-600483) DBG | I1101 00:07:49.912571   26978 retry.go:31] will retry after 786.079351ms: waiting for machine to come up
	I1101 00:07:50.699810   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:07:50.700201   26955 main.go:141] libmachine: (multinode-600483) DBG | unable to find current IP address of domain multinode-600483 in network mk-multinode-600483
	I1101 00:07:50.700233   26955 main.go:141] libmachine: (multinode-600483) DBG | I1101 00:07:50.700153   26978 retry.go:31] will retry after 1.214466314s: waiting for machine to come up
	I1101 00:07:51.916496   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:07:51.916902   26955 main.go:141] libmachine: (multinode-600483) DBG | unable to find current IP address of domain multinode-600483 in network mk-multinode-600483
	I1101 00:07:51.916928   26955 main.go:141] libmachine: (multinode-600483) DBG | I1101 00:07:51.916871   26978 retry.go:31] will retry after 1.394992829s: waiting for machine to come up
	I1101 00:07:53.313447   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:07:53.313830   26955 main.go:141] libmachine: (multinode-600483) DBG | unable to find current IP address of domain multinode-600483 in network mk-multinode-600483
	I1101 00:07:53.313860   26955 main.go:141] libmachine: (multinode-600483) DBG | I1101 00:07:53.313776   26978 retry.go:31] will retry after 2.226594152s: waiting for machine to come up
	I1101 00:07:55.542382   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:07:55.542931   26955 main.go:141] libmachine: (multinode-600483) DBG | unable to find current IP address of domain multinode-600483 in network mk-multinode-600483
	I1101 00:07:55.542965   26955 main.go:141] libmachine: (multinode-600483) DBG | I1101 00:07:55.542881   26978 retry.go:31] will retry after 2.045737592s: waiting for machine to come up
	I1101 00:07:57.591182   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:07:57.591629   26955 main.go:141] libmachine: (multinode-600483) DBG | unable to find current IP address of domain multinode-600483 in network mk-multinode-600483
	I1101 00:07:57.591659   26955 main.go:141] libmachine: (multinode-600483) DBG | I1101 00:07:57.591569   26978 retry.go:31] will retry after 2.992195343s: waiting for machine to come up
	I1101 00:08:00.585216   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:00.585583   26955 main.go:141] libmachine: (multinode-600483) DBG | unable to find current IP address of domain multinode-600483 in network mk-multinode-600483
	I1101 00:08:00.585606   26955 main.go:141] libmachine: (multinode-600483) DBG | I1101 00:08:00.585535   26978 retry.go:31] will retry after 3.817319823s: waiting for machine to come up
	I1101 00:08:04.407532   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:04.407986   26955 main.go:141] libmachine: (multinode-600483) DBG | unable to find current IP address of domain multinode-600483 in network mk-multinode-600483
	I1101 00:08:04.408012   26955 main.go:141] libmachine: (multinode-600483) DBG | I1101 00:08:04.407962   26978 retry.go:31] will retry after 3.569337484s: waiting for machine to come up
	I1101 00:08:07.980608   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:07.981098   26955 main.go:141] libmachine: (multinode-600483) Found IP for machine: 192.168.39.130
	I1101 00:08:07.981129   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has current primary IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:07.981138   26955 main.go:141] libmachine: (multinode-600483) Reserving static IP address...
	I1101 00:08:07.981568   26955 main.go:141] libmachine: (multinode-600483) DBG | unable to find host DHCP lease matching {name: "multinode-600483", mac: "52:54:00:80:59:53", ip: "192.168.39.130"} in network mk-multinode-600483
	I1101 00:08:08.056509   26955 main.go:141] libmachine: (multinode-600483) DBG | Getting to WaitForSSH function...
	I1101 00:08:08.056543   26955 main.go:141] libmachine: (multinode-600483) Reserved static IP address: 192.168.39.130
	I1101 00:08:08.056558   26955 main.go:141] libmachine: (multinode-600483) Waiting for SSH to be available...
	I1101 00:08:08.059039   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:08.059379   26955 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:minikube Clientid:01:52:54:00:80:59:53}
	I1101 00:08:08.059419   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:08.059490   26955 main.go:141] libmachine: (multinode-600483) DBG | Using SSH client type: external
	I1101 00:08:08.059548   26955 main.go:141] libmachine: (multinode-600483) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483/id_rsa (-rw-------)
	I1101 00:08:08.059589   26955 main.go:141] libmachine: (multinode-600483) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.130 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 00:08:08.059605   26955 main.go:141] libmachine: (multinode-600483) DBG | About to run SSH command:
	I1101 00:08:08.059619   26955 main.go:141] libmachine: (multinode-600483) DBG | exit 0
	I1101 00:08:08.143735   26955 main.go:141] libmachine: (multinode-600483) DBG | SSH cmd err, output: <nil>: 
	I1101 00:08:08.144108   26955 main.go:141] libmachine: (multinode-600483) KVM machine creation complete!
	I1101 00:08:08.144398   26955 main.go:141] libmachine: (multinode-600483) Calling .GetConfigRaw
	I1101 00:08:08.145017   26955 main.go:141] libmachine: (multinode-600483) Calling .DriverName
	I1101 00:08:08.145216   26955 main.go:141] libmachine: (multinode-600483) Calling .DriverName
	I1101 00:08:08.145376   26955 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1101 00:08:08.145391   26955 main.go:141] libmachine: (multinode-600483) Calling .GetState
	I1101 00:08:08.146715   26955 main.go:141] libmachine: Detecting operating system of created instance...
	I1101 00:08:08.146732   26955 main.go:141] libmachine: Waiting for SSH to be available...
	I1101 00:08:08.146742   26955 main.go:141] libmachine: Getting to WaitForSSH function...
	I1101 00:08:08.146752   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHHostname
	I1101 00:08:08.148945   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:08.149366   26955 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:08:08.149435   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:08.149549   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHPort
	I1101 00:08:08.149713   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:08:08.149876   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:08:08.150052   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHUsername
	I1101 00:08:08.150235   26955 main.go:141] libmachine: Using SSH client type: native
	I1101 00:08:08.150845   26955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.130 22 <nil> <nil>}
	I1101 00:08:08.150865   26955 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1101 00:08:08.259067   26955 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 00:08:08.259098   26955 main.go:141] libmachine: Detecting the provisioner...
	I1101 00:08:08.259111   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHHostname
	I1101 00:08:08.261941   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:08.262279   26955 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:08:08.262314   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:08.262515   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHPort
	I1101 00:08:08.262740   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:08:08.262896   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:08:08.263074   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHUsername
	I1101 00:08:08.263314   26955 main.go:141] libmachine: Using SSH client type: native
	I1101 00:08:08.263712   26955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.130 22 <nil> <nil>}
	I1101 00:08:08.263729   26955 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1101 00:08:08.372313   26955 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g0cee705-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1101 00:08:08.372374   26955 main.go:141] libmachine: found compatible host: buildroot
	I1101 00:08:08.372384   26955 main.go:141] libmachine: Provisioning with buildroot...
	I1101 00:08:08.372401   26955 main.go:141] libmachine: (multinode-600483) Calling .GetMachineName
	I1101 00:08:08.372679   26955 buildroot.go:166] provisioning hostname "multinode-600483"
	I1101 00:08:08.372700   26955 main.go:141] libmachine: (multinode-600483) Calling .GetMachineName
	I1101 00:08:08.372875   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHHostname
	I1101 00:08:08.375582   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:08.375989   26955 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:08:08.376017   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:08.376199   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHPort
	I1101 00:08:08.376393   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:08:08.376564   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:08:08.376730   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHUsername
	I1101 00:08:08.376895   26955 main.go:141] libmachine: Using SSH client type: native
	I1101 00:08:08.377245   26955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.130 22 <nil> <nil>}
	I1101 00:08:08.377263   26955 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-600483 && echo "multinode-600483" | sudo tee /etc/hostname
	I1101 00:08:08.501089   26955 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-600483
	
	I1101 00:08:08.501119   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHHostname
	I1101 00:08:08.504228   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:08.504712   26955 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:08:08.504742   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:08.504981   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHPort
	I1101 00:08:08.505206   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:08:08.505381   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:08:08.505538   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHUsername
	I1101 00:08:08.505710   26955 main.go:141] libmachine: Using SSH client type: native
	I1101 00:08:08.506043   26955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.130 22 <nil> <nil>}
	I1101 00:08:08.506062   26955 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-600483' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-600483/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-600483' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 00:08:08.623854   26955 main.go:141] libmachine: SSH cmd err, output: <nil>: 
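The two SSH commands above set the transient and persistent hostname and then patch /etc/hosts so that 127.0.1.1 resolves it. A minimal Go sketch of how those command strings can be assembled; runSSH is a hypothetical stub standing in for the real SSH session, not minikube's code:

package main

import "fmt"

// runSSH is a hypothetical stand-in for the SSH session used above;
// here it only prints the command it would execute.
func runSSH(cmd string) {
	fmt.Printf("ssh> %s\n", cmd)
}

func main() {
	host := "multinode-600483" // hostname taken from the log above

	// Set the transient hostname and persist it to /etc/hostname.
	runSSH(fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", host, host))

	// Ensure /etc/hosts maps 127.0.1.1 to the new hostname: edit an
	// existing 127.0.1.1 entry if present, append one otherwise.
	runSSH(fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, host))
}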
	I1101 00:08:08.623879   26955 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1101 00:08:08.623910   26955 buildroot.go:174] setting up certificates
	I1101 00:08:08.623921   26955 provision.go:83] configureAuth start
	I1101 00:08:08.623954   26955 main.go:141] libmachine: (multinode-600483) Calling .GetMachineName
	I1101 00:08:08.624275   26955 main.go:141] libmachine: (multinode-600483) Calling .GetIP
	I1101 00:08:08.626910   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:08.627279   26955 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:08:08.627309   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:08.627452   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHHostname
	I1101 00:08:08.629561   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:08.629886   26955 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:08:08.629915   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:08.630052   26955 provision.go:138] copyHostCerts
	I1101 00:08:08.630081   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 00:08:08.630115   26955 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem, removing ...
	I1101 00:08:08.630127   26955 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 00:08:08.630196   26955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1101 00:08:08.630288   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 00:08:08.630309   26955 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem, removing ...
	I1101 00:08:08.630316   26955 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 00:08:08.630335   26955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1101 00:08:08.630376   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 00:08:08.630391   26955 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem, removing ...
	I1101 00:08:08.630397   26955 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 00:08:08.630413   26955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1101 00:08:08.630455   26955 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.multinode-600483 san=[192.168.39.130 192.168.39.130 localhost 127.0.0.1 minikube multinode-600483]
	I1101 00:08:08.944716   26955 provision.go:172] copyRemoteCerts
	I1101 00:08:08.944772   26955 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 00:08:08.944798   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHHostname
	I1101 00:08:08.948155   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:08.948571   26955 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:08:08.948608   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:08.948883   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHPort
	I1101 00:08:08.949138   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:08:08.949435   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHUsername
	I1101 00:08:08.949651   26955 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483/id_rsa Username:docker}
	I1101 00:08:09.032852   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 00:08:09.032926   26955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 00:08:09.054815   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 00:08:09.054875   26955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1101 00:08:09.077949   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 00:08:09.078040   26955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 00:08:09.100533   26955 provision.go:86] duration metric: configureAuth took 476.599142ms
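The configureAuth step above generates a server certificate whose SANs cover the machine IP, localhost and the node names, then copies ca.pem, server.pem and server-key.pem into /etc/docker. A minimal, self-signed Go sketch of building a certificate with those SANs; the real provisioner signs with the CA key pair under .minikube/certs, and the 3-year lifetime here is an assumption:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs taken from the "generating server cert" log line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-600483"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.39.130"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "multinode-600483"},
	}

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Self-signed for brevity; minikube signs with its CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}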
	I1101 00:08:09.100558   26955 buildroot.go:189] setting minikube options for container-runtime
	I1101 00:08:09.100733   26955 config.go:182] Loaded profile config "multinode-600483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:08:09.100804   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHHostname
	I1101 00:08:09.103445   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:09.103807   26955 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:08:09.103833   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:09.103990   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHPort
	I1101 00:08:09.104219   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:08:09.104384   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:08:09.104556   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHUsername
	I1101 00:08:09.104733   26955 main.go:141] libmachine: Using SSH client type: native
	I1101 00:08:09.105064   26955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.130 22 <nil> <nil>}
	I1101 00:08:09.105080   26955 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 00:08:09.402784   26955 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 00:08:09.402816   26955 main.go:141] libmachine: Checking connection to Docker...
	I1101 00:08:09.402827   26955 main.go:141] libmachine: (multinode-600483) Calling .GetURL
	I1101 00:08:09.404189   26955 main.go:141] libmachine: (multinode-600483) DBG | Using libvirt version 6000000
	I1101 00:08:09.406525   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:09.406949   26955 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:08:09.406983   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:09.407160   26955 main.go:141] libmachine: Docker is up and running!
	I1101 00:08:09.407196   26955 main.go:141] libmachine: Reticulating splines...
	I1101 00:08:09.407204   26955 client.go:171] LocalClient.Create took 24.323994293s
	I1101 00:08:09.407229   26955 start.go:167] duration metric: libmachine.API.Create for "multinode-600483" took 24.324068859s
	I1101 00:08:09.407242   26955 start.go:300] post-start starting for "multinode-600483" (driver="kvm2")
	I1101 00:08:09.407253   26955 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 00:08:09.407290   26955 main.go:141] libmachine: (multinode-600483) Calling .DriverName
	I1101 00:08:09.407565   26955 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 00:08:09.407592   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHHostname
	I1101 00:08:09.409561   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:09.409898   26955 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:08:09.409930   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:09.410080   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHPort
	I1101 00:08:09.410289   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:08:09.410453   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHUsername
	I1101 00:08:09.410609   26955 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483/id_rsa Username:docker}
	I1101 00:08:09.494280   26955 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 00:08:09.498333   26955 command_runner.go:130] > NAME=Buildroot
	I1101 00:08:09.498364   26955 command_runner.go:130] > VERSION=2021.02.12-1-g0cee705-dirty
	I1101 00:08:09.498372   26955 command_runner.go:130] > ID=buildroot
	I1101 00:08:09.498381   26955 command_runner.go:130] > VERSION_ID=2021.02.12
	I1101 00:08:09.498390   26955 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1101 00:08:09.498429   26955 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 00:08:09.498445   26955 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1101 00:08:09.498504   26955 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1101 00:08:09.498583   26955 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> 145042.pem in /etc/ssl/certs
	I1101 00:08:09.498595   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> /etc/ssl/certs/145042.pem
	I1101 00:08:09.498704   26955 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 00:08:09.507899   26955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /etc/ssl/certs/145042.pem (1708 bytes)
	I1101 00:08:09.529932   26955 start.go:303] post-start completed in 122.671841ms
	I1101 00:08:09.529992   26955 main.go:141] libmachine: (multinode-600483) Calling .GetConfigRaw
	I1101 00:08:09.530567   26955 main.go:141] libmachine: (multinode-600483) Calling .GetIP
	I1101 00:08:09.532891   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:09.533410   26955 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:08:09.533446   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:09.533724   26955 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/config.json ...
	I1101 00:08:09.533904   26955 start.go:128] duration metric: createHost completed in 24.468850896s
	I1101 00:08:09.533925   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHHostname
	I1101 00:08:09.536207   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:09.536518   26955 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:08:09.536556   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:09.536675   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHPort
	I1101 00:08:09.536889   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:08:09.537050   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:08:09.537169   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHUsername
	I1101 00:08:09.537354   26955 main.go:141] libmachine: Using SSH client type: native
	I1101 00:08:09.537819   26955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.130 22 <nil> <nil>}
	I1101 00:08:09.537837   26955 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1101 00:08:09.648651   26955 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698797289.620405028
	
	I1101 00:08:09.648673   26955 fix.go:206] guest clock: 1698797289.620405028
	I1101 00:08:09.648679   26955 fix.go:219] Guest: 2023-11-01 00:08:09.620405028 +0000 UTC Remote: 2023-11-01 00:08:09.533915511 +0000 UTC m=+24.592099801 (delta=86.489517ms)
	I1101 00:08:09.648726   26955 fix.go:190] guest clock delta is within tolerance: 86.489517ms
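The guest clock check above compares the output of `date +%s.%N` on the VM with the host-side timestamp and accepts the machine when the delta is within tolerance. A rough Go sketch of that comparison; the 2s tolerance is an assumed value, not necessarily the one minikube uses:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// withinTolerance reports whether the guest clock (seconds.nanoseconds, as
// printed by `date +%s.%N` over SSH) is within tol of the reference clock.
func withinTolerance(guestOut string, ref time.Time, tol time.Duration) (time.Duration, bool) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, false
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := ref.Sub(guest)
	return delta, math.Abs(float64(delta)) <= float64(tol)
}

func main() {
	// Guest and remote timestamps taken from the log above.
	ref := time.Date(2023, time.November, 1, 0, 8, 9, 533915511, time.UTC)
	delta, ok := withinTolerance("1698797289.620405028", ref, 2*time.Second)
	fmt.Println(delta, ok)
}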
	I1101 00:08:09.648734   26955 start.go:83] releasing machines lock for "multinode-600483", held for 24.583766812s
	I1101 00:08:09.648759   26955 main.go:141] libmachine: (multinode-600483) Calling .DriverName
	I1101 00:08:09.649041   26955 main.go:141] libmachine: (multinode-600483) Calling .GetIP
	I1101 00:08:09.651676   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:09.652070   26955 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:08:09.652100   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:09.652261   26955 main.go:141] libmachine: (multinode-600483) Calling .DriverName
	I1101 00:08:09.652747   26955 main.go:141] libmachine: (multinode-600483) Calling .DriverName
	I1101 00:08:09.652921   26955 main.go:141] libmachine: (multinode-600483) Calling .DriverName
	I1101 00:08:09.653003   26955 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 00:08:09.653044   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHHostname
	I1101 00:08:09.653124   26955 ssh_runner.go:195] Run: cat /version.json
	I1101 00:08:09.653154   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHHostname
	I1101 00:08:09.656541   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:09.657385   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:09.657569   26955 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:08:09.657601   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:09.657766   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHPort
	I1101 00:08:09.657857   26955 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:08:09.657889   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:09.657977   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:08:09.658053   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHPort
	I1101 00:08:09.658200   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHUsername
	I1101 00:08:09.658212   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:08:09.658374   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHUsername
	I1101 00:08:09.658374   26955 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483/id_rsa Username:docker}
	I1101 00:08:09.658544   26955 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483/id_rsa Username:docker}
	I1101 00:08:09.772169   26955 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1101 00:08:09.772236   26955 command_runner.go:130] > {"iso_version": "v1.32.0-1698773592-17486", "kicbase_version": "v0.0.41-1698660445-17527", "minikube_version": "v1.32.0-beta.0", "commit": "01e1cff766666ed9b9dd97c2a32d71cdb94ff3cf"}
	I1101 00:08:09.772366   26955 ssh_runner.go:195] Run: systemctl --version
	I1101 00:08:09.778012   26955 command_runner.go:130] > systemd 247 (247)
	I1101 00:08:09.778125   26955 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1101 00:08:09.778302   26955 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 00:08:09.937901   26955 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1101 00:08:09.943370   26955 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1101 00:08:09.943403   26955 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 00:08:09.943468   26955 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 00:08:09.958553   26955 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1101 00:08:09.958842   26955 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
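Above, any pre-existing bridge/podman CNI configs in /etc/cni/net.d are renamed with a .mk_disabled suffix so they cannot conflict with the CNI config minikube lays down. A local Go sketch of the same rename logic; the real step runs remotely under sudo via the find/mv command shown:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableCNIConfs renames bridge/podman CNI configs in dir, mirroring the
// `find ... -exec mv {} {}.mk_disabled` step above.
func disableCNIConfs(dir string) ([]string, error) {
	var disabled []string
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	files, err := disableCNIConfs("/etc/cni/net.d")
	fmt.Println(files, err)
}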
	I1101 00:08:09.958859   26955 start.go:472] detecting cgroup driver to use...
	I1101 00:08:09.958910   26955 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 00:08:09.974061   26955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 00:08:09.988629   26955 docker.go:204] disabling cri-docker service (if available) ...
	I1101 00:08:09.988688   26955 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 00:08:10.003843   26955 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 00:08:10.019126   26955 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 00:08:10.033557   26955 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1101 00:08:10.126973   26955 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 00:08:10.255879   26955 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1101 00:08:10.255920   26955 docker.go:220] disabling docker service ...
	I1101 00:08:10.255985   26955 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 00:08:10.271335   26955 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 00:08:10.282879   26955 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1101 00:08:10.283082   26955 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 00:08:10.389820   26955 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1101 00:08:10.389923   26955 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 00:08:10.402172   26955 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1101 00:08:10.402493   26955 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1101 00:08:10.494501   26955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 00:08:10.506627   26955 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 00:08:10.522787   26955 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1101 00:08:10.522829   26955 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 00:08:10.522898   26955 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:08:10.532354   26955 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 00:08:10.532431   26955 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:08:10.541810   26955 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:08:10.551383   26955 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
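The sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf to pin the pause image, switch cgroup_manager to cgroupfs, and run conmon in the pod cgroup. A Go sketch of the equivalent rewrite on an in-memory config string, using regexp in place of sed:

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the same substitutions as the sed commands above.
func rewriteCrioConf(conf string) string {
	// Pin the pause image.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Drop any existing conmon_cgroup line.
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).
		ReplaceAllString(conf, "")
	// Force cgroupfs and re-add conmon_cgroup = "pod" right after it.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	in := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(rewriteCrioConf(in))
}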
	I1101 00:08:10.561107   26955 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 00:08:10.570805   26955 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 00:08:10.579398   26955 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 00:08:10.579443   26955 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 00:08:10.579494   26955 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 00:08:10.592376   26955 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 00:08:10.600641   26955 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:08:10.716170   26955 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 00:08:10.885872   26955 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 00:08:10.885950   26955 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 00:08:10.890873   26955 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1101 00:08:10.890901   26955 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1101 00:08:10.890918   26955 command_runner.go:130] > Device: 16h/22d	Inode: 733         Links: 1
	I1101 00:08:10.890936   26955 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1101 00:08:10.890948   26955 command_runner.go:130] > Access: 2023-11-01 00:08:10.846110617 +0000
	I1101 00:08:10.890956   26955 command_runner.go:130] > Modify: 2023-11-01 00:08:10.846110617 +0000
	I1101 00:08:10.890968   26955 command_runner.go:130] > Change: 2023-11-01 00:08:10.846110617 +0000
	I1101 00:08:10.890976   26955 command_runner.go:130] >  Birth: -
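Restarting CRI-O is followed by a bounded wait for /var/run/crio/crio.sock to appear ("Will wait 60s for socket path"). A small Go sketch of such a poll-with-deadline loop; the 500ms poll interval is an assumption:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for path until it exists or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
}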
	I1101 00:08:10.891002   26955 start.go:540] Will wait 60s for crictl version
	I1101 00:08:10.891052   26955 ssh_runner.go:195] Run: which crictl
	I1101 00:08:10.894623   26955 command_runner.go:130] > /usr/bin/crictl
	I1101 00:08:10.894738   26955 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 00:08:10.932930   26955 command_runner.go:130] > Version:  0.1.0
	I1101 00:08:10.932956   26955 command_runner.go:130] > RuntimeName:  cri-o
	I1101 00:08:10.932964   26955 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1101 00:08:10.932978   26955 command_runner.go:130] > RuntimeApiVersion:  v1
	I1101 00:08:10.933445   26955 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1101 00:08:10.933539   26955 ssh_runner.go:195] Run: crio --version
	I1101 00:08:10.973621   26955 command_runner.go:130] > crio version 1.24.1
	I1101 00:08:10.973642   26955 command_runner.go:130] > Version:          1.24.1
	I1101 00:08:10.973650   26955 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1101 00:08:10.973654   26955 command_runner.go:130] > GitTreeState:     dirty
	I1101 00:08:10.973659   26955 command_runner.go:130] > BuildDate:        2023-10-31T22:57:11Z
	I1101 00:08:10.973664   26955 command_runner.go:130] > GoVersion:        go1.19.9
	I1101 00:08:10.973668   26955 command_runner.go:130] > Compiler:         gc
	I1101 00:08:10.973672   26955 command_runner.go:130] > Platform:         linux/amd64
	I1101 00:08:10.973678   26955 command_runner.go:130] > Linkmode:         dynamic
	I1101 00:08:10.973685   26955 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1101 00:08:10.973690   26955 command_runner.go:130] > SeccompEnabled:   true
	I1101 00:08:10.973714   26955 command_runner.go:130] > AppArmorEnabled:  false
	I1101 00:08:10.973778   26955 ssh_runner.go:195] Run: crio --version
	I1101 00:08:11.019757   26955 command_runner.go:130] > crio version 1.24.1
	I1101 00:08:11.019776   26955 command_runner.go:130] > Version:          1.24.1
	I1101 00:08:11.019784   26955 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1101 00:08:11.019788   26955 command_runner.go:130] > GitTreeState:     dirty
	I1101 00:08:11.019794   26955 command_runner.go:130] > BuildDate:        2023-10-31T22:57:11Z
	I1101 00:08:11.019799   26955 command_runner.go:130] > GoVersion:        go1.19.9
	I1101 00:08:11.019803   26955 command_runner.go:130] > Compiler:         gc
	I1101 00:08:11.019807   26955 command_runner.go:130] > Platform:         linux/amd64
	I1101 00:08:11.019813   26955 command_runner.go:130] > Linkmode:         dynamic
	I1101 00:08:11.019820   26955 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1101 00:08:11.019825   26955 command_runner.go:130] > SeccompEnabled:   true
	I1101 00:08:11.019829   26955 command_runner.go:130] > AppArmorEnabled:  false
	I1101 00:08:11.023268   26955 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1101 00:08:11.024729   26955 main.go:141] libmachine: (multinode-600483) Calling .GetIP
	I1101 00:08:11.027241   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:11.027556   26955 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:08:11.027581   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:11.027776   26955 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1101 00:08:11.031607   26955 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 00:08:11.043826   26955 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 00:08:11.043880   26955 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 00:08:11.077561   26955 command_runner.go:130] > {
	I1101 00:08:11.077580   26955 command_runner.go:130] >   "images": [
	I1101 00:08:11.077584   26955 command_runner.go:130] >   ]
	I1101 00:08:11.077587   26955 command_runner.go:130] > }
	I1101 00:08:11.077684   26955 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1101 00:08:11.077736   26955 ssh_runner.go:195] Run: which lz4
	I1101 00:08:11.081412   26955 command_runner.go:130] > /usr/bin/lz4
	I1101 00:08:11.081446   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1101 00:08:11.081534   26955 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1101 00:08:11.085583   26955 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 00:08:11.085612   26955 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 00:08:11.085634   26955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1101 00:08:12.695403   26955 crio.go:444] Took 1.613903 seconds to copy over tarball
	I1101 00:08:12.695473   26955 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 00:08:15.611801   26955 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.916300835s)
	I1101 00:08:15.611830   26955 crio.go:451] Took 2.916405 seconds to extract the tarball
	I1101 00:08:15.611838   26955 ssh_runner.go:146] rm: /preloaded.tar.lz4
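The preload path above copies the ~457MB image tarball to the VM, extracts it with lz4-compressed tar into /var, then removes it. A local Go sketch of the extract step; in the real flow this runs over SSH with sudo:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload mirrors `sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4`;
// the log above removes the tarball afterwards with a separate rm.
func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	fmt.Println(extractPreload("/preloaded.tar.lz4"))
}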
	I1101 00:08:15.652359   26955 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 00:08:15.728288   26955 command_runner.go:130] > {
	I1101 00:08:15.728309   26955 command_runner.go:130] >   "images": [
	I1101 00:08:15.728314   26955 command_runner.go:130] >     {
	I1101 00:08:15.728322   26955 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1101 00:08:15.728327   26955 command_runner.go:130] >       "repoTags": [
	I1101 00:08:15.728333   26955 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1101 00:08:15.728336   26955 command_runner.go:130] >       ],
	I1101 00:08:15.728340   26955 command_runner.go:130] >       "repoDigests": [
	I1101 00:08:15.728349   26955 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1101 00:08:15.728356   26955 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1101 00:08:15.728360   26955 command_runner.go:130] >       ],
	I1101 00:08:15.728364   26955 command_runner.go:130] >       "size": "65258016",
	I1101 00:08:15.728378   26955 command_runner.go:130] >       "uid": null,
	I1101 00:08:15.728392   26955 command_runner.go:130] >       "username": "",
	I1101 00:08:15.728403   26955 command_runner.go:130] >       "spec": null,
	I1101 00:08:15.728411   26955 command_runner.go:130] >       "pinned": false
	I1101 00:08:15.728415   26955 command_runner.go:130] >     },
	I1101 00:08:15.728419   26955 command_runner.go:130] >     {
	I1101 00:08:15.728425   26955 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1101 00:08:15.728432   26955 command_runner.go:130] >       "repoTags": [
	I1101 00:08:15.728438   26955 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1101 00:08:15.728445   26955 command_runner.go:130] >       ],
	I1101 00:08:15.728450   26955 command_runner.go:130] >       "repoDigests": [
	I1101 00:08:15.728460   26955 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1101 00:08:15.728474   26955 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1101 00:08:15.728483   26955 command_runner.go:130] >       ],
	I1101 00:08:15.728492   26955 command_runner.go:130] >       "size": "31470524",
	I1101 00:08:15.728500   26955 command_runner.go:130] >       "uid": null,
	I1101 00:08:15.728504   26955 command_runner.go:130] >       "username": "",
	I1101 00:08:15.728512   26955 command_runner.go:130] >       "spec": null,
	I1101 00:08:15.728520   26955 command_runner.go:130] >       "pinned": false
	I1101 00:08:15.728528   26955 command_runner.go:130] >     },
	I1101 00:08:15.728532   26955 command_runner.go:130] >     {
	I1101 00:08:15.728538   26955 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1101 00:08:15.728545   26955 command_runner.go:130] >       "repoTags": [
	I1101 00:08:15.728550   26955 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1101 00:08:15.728557   26955 command_runner.go:130] >       ],
	I1101 00:08:15.728561   26955 command_runner.go:130] >       "repoDigests": [
	I1101 00:08:15.728568   26955 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1101 00:08:15.728578   26955 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1101 00:08:15.728584   26955 command_runner.go:130] >       ],
	I1101 00:08:15.728589   26955 command_runner.go:130] >       "size": "53621675",
	I1101 00:08:15.728596   26955 command_runner.go:130] >       "uid": null,
	I1101 00:08:15.728600   26955 command_runner.go:130] >       "username": "",
	I1101 00:08:15.728607   26955 command_runner.go:130] >       "spec": null,
	I1101 00:08:15.728611   26955 command_runner.go:130] >       "pinned": false
	I1101 00:08:15.728617   26955 command_runner.go:130] >     },
	I1101 00:08:15.728621   26955 command_runner.go:130] >     {
	I1101 00:08:15.728633   26955 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1101 00:08:15.728641   26955 command_runner.go:130] >       "repoTags": [
	I1101 00:08:15.728646   26955 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1101 00:08:15.728653   26955 command_runner.go:130] >       ],
	I1101 00:08:15.728657   26955 command_runner.go:130] >       "repoDigests": [
	I1101 00:08:15.728666   26955 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1101 00:08:15.728676   26955 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1101 00:08:15.728693   26955 command_runner.go:130] >       ],
	I1101 00:08:15.728702   26955 command_runner.go:130] >       "size": "295456551",
	I1101 00:08:15.728709   26955 command_runner.go:130] >       "uid": {
	I1101 00:08:15.728714   26955 command_runner.go:130] >         "value": "0"
	I1101 00:08:15.728720   26955 command_runner.go:130] >       },
	I1101 00:08:15.728724   26955 command_runner.go:130] >       "username": "",
	I1101 00:08:15.728731   26955 command_runner.go:130] >       "spec": null,
	I1101 00:08:15.728735   26955 command_runner.go:130] >       "pinned": false
	I1101 00:08:15.728742   26955 command_runner.go:130] >     },
	I1101 00:08:15.728746   26955 command_runner.go:130] >     {
	I1101 00:08:15.728754   26955 command_runner.go:130] >       "id": "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076",
	I1101 00:08:15.728764   26955 command_runner.go:130] >       "repoTags": [
	I1101 00:08:15.728772   26955 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.3"
	I1101 00:08:15.728777   26955 command_runner.go:130] >       ],
	I1101 00:08:15.728781   26955 command_runner.go:130] >       "repoDigests": [
	I1101 00:08:15.728791   26955 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab",
	I1101 00:08:15.728801   26955 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"
	I1101 00:08:15.728807   26955 command_runner.go:130] >       ],
	I1101 00:08:15.728812   26955 command_runner.go:130] >       "size": "127165392",
	I1101 00:08:15.728818   26955 command_runner.go:130] >       "uid": {
	I1101 00:08:15.728823   26955 command_runner.go:130] >         "value": "0"
	I1101 00:08:15.728829   26955 command_runner.go:130] >       },
	I1101 00:08:15.728833   26955 command_runner.go:130] >       "username": "",
	I1101 00:08:15.728838   26955 command_runner.go:130] >       "spec": null,
	I1101 00:08:15.728845   26955 command_runner.go:130] >       "pinned": false
	I1101 00:08:15.728849   26955 command_runner.go:130] >     },
	I1101 00:08:15.728855   26955 command_runner.go:130] >     {
	I1101 00:08:15.728863   26955 command_runner.go:130] >       "id": "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3",
	I1101 00:08:15.728871   26955 command_runner.go:130] >       "repoTags": [
	I1101 00:08:15.728880   26955 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.3"
	I1101 00:08:15.728887   26955 command_runner.go:130] >       ],
	I1101 00:08:15.728891   26955 command_runner.go:130] >       "repoDigests": [
	I1101 00:08:15.728901   26955 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707",
	I1101 00:08:15.728912   26955 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d"
	I1101 00:08:15.728918   26955 command_runner.go:130] >       ],
	I1101 00:08:15.728923   26955 command_runner.go:130] >       "size": "123188534",
	I1101 00:08:15.728927   26955 command_runner.go:130] >       "uid": {
	I1101 00:08:15.728934   26955 command_runner.go:130] >         "value": "0"
	I1101 00:08:15.728938   26955 command_runner.go:130] >       },
	I1101 00:08:15.728945   26955 command_runner.go:130] >       "username": "",
	I1101 00:08:15.728949   26955 command_runner.go:130] >       "spec": null,
	I1101 00:08:15.728956   26955 command_runner.go:130] >       "pinned": false
	I1101 00:08:15.728960   26955 command_runner.go:130] >     },
	I1101 00:08:15.728966   26955 command_runner.go:130] >     {
	I1101 00:08:15.728972   26955 command_runner.go:130] >       "id": "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf",
	I1101 00:08:15.728979   26955 command_runner.go:130] >       "repoTags": [
	I1101 00:08:15.728984   26955 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.3"
	I1101 00:08:15.728993   26955 command_runner.go:130] >       ],
	I1101 00:08:15.729000   26955 command_runner.go:130] >       "repoDigests": [
	I1101 00:08:15.729007   26955 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8",
	I1101 00:08:15.729017   26955 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"
	I1101 00:08:15.729021   26955 command_runner.go:130] >       ],
	I1101 00:08:15.729028   26955 command_runner.go:130] >       "size": "74691991",
	I1101 00:08:15.729032   26955 command_runner.go:130] >       "uid": null,
	I1101 00:08:15.729039   26955 command_runner.go:130] >       "username": "",
	I1101 00:08:15.729043   26955 command_runner.go:130] >       "spec": null,
	I1101 00:08:15.729050   26955 command_runner.go:130] >       "pinned": false
	I1101 00:08:15.729054   26955 command_runner.go:130] >     },
	I1101 00:08:15.729060   26955 command_runner.go:130] >     {
	I1101 00:08:15.729067   26955 command_runner.go:130] >       "id": "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4",
	I1101 00:08:15.729074   26955 command_runner.go:130] >       "repoTags": [
	I1101 00:08:15.729079   26955 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.3"
	I1101 00:08:15.729085   26955 command_runner.go:130] >       ],
	I1101 00:08:15.729090   26955 command_runner.go:130] >       "repoDigests": [
	I1101 00:08:15.729136   26955 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725",
	I1101 00:08:15.729151   26955 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374"
	I1101 00:08:15.729156   26955 command_runner.go:130] >       ],
	I1101 00:08:15.729160   26955 command_runner.go:130] >       "size": "61498678",
	I1101 00:08:15.729164   26955 command_runner.go:130] >       "uid": {
	I1101 00:08:15.729172   26955 command_runner.go:130] >         "value": "0"
	I1101 00:08:15.729176   26955 command_runner.go:130] >       },
	I1101 00:08:15.729183   26955 command_runner.go:130] >       "username": "",
	I1101 00:08:15.729187   26955 command_runner.go:130] >       "spec": null,
	I1101 00:08:15.729193   26955 command_runner.go:130] >       "pinned": false
	I1101 00:08:15.729197   26955 command_runner.go:130] >     },
	I1101 00:08:15.729204   26955 command_runner.go:130] >     {
	I1101 00:08:15.729210   26955 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1101 00:08:15.729217   26955 command_runner.go:130] >       "repoTags": [
	I1101 00:08:15.729222   26955 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1101 00:08:15.729228   26955 command_runner.go:130] >       ],
	I1101 00:08:15.729233   26955 command_runner.go:130] >       "repoDigests": [
	I1101 00:08:15.729242   26955 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1101 00:08:15.729252   26955 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1101 00:08:15.729259   26955 command_runner.go:130] >       ],
	I1101 00:08:15.729266   26955 command_runner.go:130] >       "size": "750414",
	I1101 00:08:15.729270   26955 command_runner.go:130] >       "uid": {
	I1101 00:08:15.729278   26955 command_runner.go:130] >         "value": "65535"
	I1101 00:08:15.729282   26955 command_runner.go:130] >       },
	I1101 00:08:15.729289   26955 command_runner.go:130] >       "username": "",
	I1101 00:08:15.729294   26955 command_runner.go:130] >       "spec": null,
	I1101 00:08:15.729301   26955 command_runner.go:130] >       "pinned": false
	I1101 00:08:15.729304   26955 command_runner.go:130] >     }
	I1101 00:08:15.729313   26955 command_runner.go:130] >   ]
	I1101 00:08:15.729320   26955 command_runner.go:130] > }
	I1101 00:08:15.729761   26955 crio.go:496] all images are preloaded for cri-o runtime.
	I1101 00:08:15.729779   26955 cache_images.go:84] Images are preloaded, skipping loading
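Whether the preload is needed is decided by parsing `sudo crictl images --output json` and looking for the expected repo tag (e.g. registry.k8s.io/kube-apiserver:v1.28.3): the empty image list at 00:08:11 triggered the tarball copy, and the populated list above skips further loading. A Go sketch of that check, using the JSON shape shown in the log:

package main

import (
	"encoding/json"
	"fmt"
)

// imageList mirrors the shape of `crictl images --output json` above.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether any listed image carries the given tag.
func hasImage(out []byte, tag string) (bool, error) {
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	out := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.28.3"]}]}`)
	ok, err := hasImage(out, "registry.k8s.io/kube-apiserver:v1.28.3")
	fmt.Println(ok, err)
}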
	I1101 00:08:15.729849   26955 ssh_runner.go:195] Run: crio config
	I1101 00:08:15.787701   26955 command_runner.go:130] ! time="2023-11-01 00:08:15.767773576Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1101 00:08:15.787726   26955 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1101 00:08:15.792387   26955 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1101 00:08:15.792405   26955 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1101 00:08:15.792418   26955 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1101 00:08:15.792427   26955 command_runner.go:130] > #
	I1101 00:08:15.792443   26955 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1101 00:08:15.792464   26955 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1101 00:08:15.792476   26955 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1101 00:08:15.792485   26955 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1101 00:08:15.792492   26955 command_runner.go:130] > # reload'.
	I1101 00:08:15.792499   26955 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1101 00:08:15.792508   26955 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1101 00:08:15.792516   26955 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1101 00:08:15.792525   26955 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1101 00:08:15.792533   26955 command_runner.go:130] > [crio]
	I1101 00:08:15.792544   26955 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1101 00:08:15.792553   26955 command_runner.go:130] > # containers images, in this directory.
	I1101 00:08:15.792564   26955 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1101 00:08:15.792575   26955 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1101 00:08:15.792583   26955 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1101 00:08:15.792589   26955 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1101 00:08:15.792597   26955 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1101 00:08:15.792605   26955 command_runner.go:130] > storage_driver = "overlay"
	I1101 00:08:15.792611   26955 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1101 00:08:15.792628   26955 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1101 00:08:15.792639   26955 command_runner.go:130] > storage_option = [
	I1101 00:08:15.792652   26955 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1101 00:08:15.792661   26955 command_runner.go:130] > ]
	I1101 00:08:15.792674   26955 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1101 00:08:15.792684   26955 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1101 00:08:15.792692   26955 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1101 00:08:15.792697   26955 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1101 00:08:15.792706   26955 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1101 00:08:15.792712   26955 command_runner.go:130] > # always happen on a node reboot
	I1101 00:08:15.792718   26955 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1101 00:08:15.792733   26955 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1101 00:08:15.792747   26955 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1101 00:08:15.792767   26955 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1101 00:08:15.792783   26955 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1101 00:08:15.792796   26955 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1101 00:08:15.792812   26955 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1101 00:08:15.792821   26955 command_runner.go:130] > # internal_wipe = true
	I1101 00:08:15.792837   26955 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1101 00:08:15.792852   26955 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1101 00:08:15.792865   26955 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1101 00:08:15.792877   26955 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1101 00:08:15.792890   26955 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1101 00:08:15.792900   26955 command_runner.go:130] > [crio.api]
	I1101 00:08:15.792916   26955 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1101 00:08:15.792924   26955 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1101 00:08:15.792935   26955 command_runner.go:130] > # IP address on which the stream server will listen.
	I1101 00:08:15.792947   26955 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1101 00:08:15.792963   26955 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1101 00:08:15.792975   26955 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1101 00:08:15.792985   26955 command_runner.go:130] > # stream_port = "0"
	I1101 00:08:15.792998   26955 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1101 00:08:15.793009   26955 command_runner.go:130] > # stream_enable_tls = false
	I1101 00:08:15.793022   26955 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1101 00:08:15.793029   26955 command_runner.go:130] > # stream_idle_timeout = ""
	I1101 00:08:15.793038   26955 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1101 00:08:15.793057   26955 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1101 00:08:15.793067   26955 command_runner.go:130] > # minutes.
	I1101 00:08:15.793077   26955 command_runner.go:130] > # stream_tls_cert = ""
	I1101 00:08:15.793090   26955 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1101 00:08:15.793104   26955 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1101 00:08:15.793113   26955 command_runner.go:130] > # stream_tls_key = ""
	I1101 00:08:15.793123   26955 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1101 00:08:15.793149   26955 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1101 00:08:15.793162   26955 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1101 00:08:15.793173   26955 command_runner.go:130] > # stream_tls_ca = ""
	I1101 00:08:15.793188   26955 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1101 00:08:15.793199   26955 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1101 00:08:15.793215   26955 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1101 00:08:15.793225   26955 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1101 00:08:15.793254   26955 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1101 00:08:15.793270   26955 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1101 00:08:15.793277   26955 command_runner.go:130] > [crio.runtime]
	I1101 00:08:15.793287   26955 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1101 00:08:15.793303   26955 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1101 00:08:15.793313   26955 command_runner.go:130] > # "nofile=1024:2048"
	I1101 00:08:15.793327   26955 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1101 00:08:15.793338   26955 command_runner.go:130] > # default_ulimits = [
	I1101 00:08:15.793346   26955 command_runner.go:130] > # ]
	I1101 00:08:15.793352   26955 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1101 00:08:15.793362   26955 command_runner.go:130] > # no_pivot = false
	I1101 00:08:15.793375   26955 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1101 00:08:15.793390   26955 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1101 00:08:15.793401   26955 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1101 00:08:15.793415   26955 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1101 00:08:15.793426   26955 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1101 00:08:15.793440   26955 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1101 00:08:15.793470   26955 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1101 00:08:15.793481   26955 command_runner.go:130] > # Cgroup setting for conmon
	I1101 00:08:15.793494   26955 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1101 00:08:15.793504   26955 command_runner.go:130] > conmon_cgroup = "pod"
	I1101 00:08:15.793518   26955 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1101 00:08:15.793534   26955 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1101 00:08:15.793548   26955 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1101 00:08:15.793555   26955 command_runner.go:130] > conmon_env = [
	I1101 00:08:15.793565   26955 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1101 00:08:15.793574   26955 command_runner.go:130] > ]
	I1101 00:08:15.793587   26955 command_runner.go:130] > # Additional environment variables to set for all the
	I1101 00:08:15.793599   26955 command_runner.go:130] > # containers. These are overridden if set in the
	I1101 00:08:15.793612   26955 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1101 00:08:15.793622   26955 command_runner.go:130] > # default_env = [
	I1101 00:08:15.793631   26955 command_runner.go:130] > # ]
	I1101 00:08:15.793640   26955 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1101 00:08:15.793649   26955 command_runner.go:130] > # selinux = false
	I1101 00:08:15.793663   26955 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1101 00:08:15.793677   26955 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1101 00:08:15.793690   26955 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1101 00:08:15.793701   26955 command_runner.go:130] > # seccomp_profile = ""
	I1101 00:08:15.793714   26955 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1101 00:08:15.793727   26955 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1101 00:08:15.793744   26955 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1101 00:08:15.793755   26955 command_runner.go:130] > # which might increase security.
	I1101 00:08:15.793766   26955 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1101 00:08:15.793780   26955 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1101 00:08:15.793796   26955 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1101 00:08:15.793810   26955 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1101 00:08:15.793824   26955 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1101 00:08:15.793832   26955 command_runner.go:130] > # This option supports live configuration reload.
	I1101 00:08:15.793841   26955 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1101 00:08:15.793855   26955 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1101 00:08:15.793866   26955 command_runner.go:130] > # the cgroup blockio controller.
	I1101 00:08:15.793876   26955 command_runner.go:130] > # blockio_config_file = ""
	I1101 00:08:15.793890   26955 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1101 00:08:15.793900   26955 command_runner.go:130] > # irqbalance daemon.
	I1101 00:08:15.793912   26955 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1101 00:08:15.793922   26955 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1101 00:08:15.793934   26955 command_runner.go:130] > # This option supports live configuration reload.
	I1101 00:08:15.793945   26955 command_runner.go:130] > # rdt_config_file = ""
	I1101 00:08:15.793959   26955 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1101 00:08:15.793970   26955 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1101 00:08:15.793984   26955 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1101 00:08:15.793995   26955 command_runner.go:130] > # separate_pull_cgroup = ""
	I1101 00:08:15.794008   26955 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1101 00:08:15.794018   26955 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1101 00:08:15.794026   26955 command_runner.go:130] > # will be added.
	I1101 00:08:15.794034   26955 command_runner.go:130] > # default_capabilities = [
	I1101 00:08:15.794045   26955 command_runner.go:130] > # 	"CHOWN",
	I1101 00:08:15.794052   26955 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1101 00:08:15.794062   26955 command_runner.go:130] > # 	"FSETID",
	I1101 00:08:15.794072   26955 command_runner.go:130] > # 	"FOWNER",
	I1101 00:08:15.794081   26955 command_runner.go:130] > # 	"SETGID",
	I1101 00:08:15.794091   26955 command_runner.go:130] > # 	"SETUID",
	I1101 00:08:15.794100   26955 command_runner.go:130] > # 	"SETPCAP",
	I1101 00:08:15.794113   26955 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1101 00:08:15.794119   26955 command_runner.go:130] > # 	"KILL",
	I1101 00:08:15.794125   26955 command_runner.go:130] > # ]
	I1101 00:08:15.794142   26955 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1101 00:08:15.794157   26955 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1101 00:08:15.794168   26955 command_runner.go:130] > # default_sysctls = [
	I1101 00:08:15.794177   26955 command_runner.go:130] > # ]
	I1101 00:08:15.794188   26955 command_runner.go:130] > # List of devices on the host that a
	I1101 00:08:15.794203   26955 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1101 00:08:15.794213   26955 command_runner.go:130] > # allowed_devices = [
	I1101 00:08:15.794222   26955 command_runner.go:130] > # 	"/dev/fuse",
	I1101 00:08:15.794228   26955 command_runner.go:130] > # ]
	I1101 00:08:15.794236   26955 command_runner.go:130] > # List of additional devices, specified as
	I1101 00:08:15.794252   26955 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1101 00:08:15.794265   26955 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1101 00:08:15.794316   26955 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1101 00:08:15.794331   26955 command_runner.go:130] > # additional_devices = [
	I1101 00:08:15.794337   26955 command_runner.go:130] > # ]
	I1101 00:08:15.794347   26955 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1101 00:08:15.794358   26955 command_runner.go:130] > # cdi_spec_dirs = [
	I1101 00:08:15.794367   26955 command_runner.go:130] > # 	"/etc/cdi",
	I1101 00:08:15.794381   26955 command_runner.go:130] > # 	"/var/run/cdi",
	I1101 00:08:15.794391   26955 command_runner.go:130] > # ]
	I1101 00:08:15.794404   26955 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1101 00:08:15.794413   26955 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1101 00:08:15.794423   26955 command_runner.go:130] > # Defaults to false.
	I1101 00:08:15.794435   26955 command_runner.go:130] > # device_ownership_from_security_context = false
	I1101 00:08:15.794455   26955 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1101 00:08:15.794471   26955 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1101 00:08:15.794481   26955 command_runner.go:130] > # hooks_dir = [
	I1101 00:08:15.794492   26955 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1101 00:08:15.794499   26955 command_runner.go:130] > # ]
	I1101 00:08:15.794507   26955 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1101 00:08:15.794522   26955 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1101 00:08:15.794535   26955 command_runner.go:130] > # its default mounts from the following two files:
	I1101 00:08:15.794541   26955 command_runner.go:130] > #
	I1101 00:08:15.794555   26955 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1101 00:08:15.794569   26955 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1101 00:08:15.794582   26955 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1101 00:08:15.794594   26955 command_runner.go:130] > #
	I1101 00:08:15.794605   26955 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1101 00:08:15.794616   26955 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1101 00:08:15.794631   26955 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1101 00:08:15.794643   26955 command_runner.go:130] > #      only add mounts it finds in this file.
	I1101 00:08:15.794651   26955 command_runner.go:130] > #
	I1101 00:08:15.794662   26955 command_runner.go:130] > # default_mounts_file = ""
	I1101 00:08:15.794674   26955 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1101 00:08:15.794688   26955 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1101 00:08:15.794695   26955 command_runner.go:130] > pids_limit = 1024
	I1101 00:08:15.794705   26955 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1101 00:08:15.794719   26955 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1101 00:08:15.794733   26955 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1101 00:08:15.794750   26955 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1101 00:08:15.794760   26955 command_runner.go:130] > # log_size_max = -1
	I1101 00:08:15.794774   26955 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1101 00:08:15.794783   26955 command_runner.go:130] > # log_to_journald = false
	I1101 00:08:15.794793   26955 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1101 00:08:15.794812   26955 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1101 00:08:15.794825   26955 command_runner.go:130] > # Path to directory for container attach sockets.
	I1101 00:08:15.794834   26955 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1101 00:08:15.794855   26955 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1101 00:08:15.794862   26955 command_runner.go:130] > # bind_mount_prefix = ""
	I1101 00:08:15.794872   26955 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1101 00:08:15.794879   26955 command_runner.go:130] > # read_only = false
	I1101 00:08:15.794890   26955 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1101 00:08:15.794898   26955 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1101 00:08:15.794902   26955 command_runner.go:130] > # live configuration reload.
	I1101 00:08:15.794908   26955 command_runner.go:130] > # log_level = "info"
	I1101 00:08:15.794919   26955 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1101 00:08:15.794932   26955 command_runner.go:130] > # This option supports live configuration reload.
	I1101 00:08:15.794940   26955 command_runner.go:130] > # log_filter = ""
	I1101 00:08:15.794953   26955 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1101 00:08:15.794967   26955 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1101 00:08:15.794980   26955 command_runner.go:130] > # separated by comma.
	I1101 00:08:15.794990   26955 command_runner.go:130] > # uid_mappings = ""
	I1101 00:08:15.795005   26955 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1101 00:08:15.795018   26955 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1101 00:08:15.795028   26955 command_runner.go:130] > # separated by comma.
	I1101 00:08:15.795039   26955 command_runner.go:130] > # gid_mappings = ""
	I1101 00:08:15.795050   26955 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1101 00:08:15.795064   26955 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1101 00:08:15.795078   26955 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1101 00:08:15.795088   26955 command_runner.go:130] > # minimum_mappable_uid = -1
	I1101 00:08:15.795102   26955 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1101 00:08:15.795112   26955 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1101 00:08:15.795125   26955 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1101 00:08:15.795136   26955 command_runner.go:130] > # minimum_mappable_gid = -1
	I1101 00:08:15.795148   26955 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1101 00:08:15.795162   26955 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1101 00:08:15.795175   26955 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1101 00:08:15.795186   26955 command_runner.go:130] > # ctr_stop_timeout = 30
	I1101 00:08:15.795199   26955 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1101 00:08:15.795208   26955 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1101 00:08:15.795234   26955 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1101 00:08:15.795247   26955 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1101 00:08:15.795267   26955 command_runner.go:130] > drop_infra_ctr = false
	I1101 00:08:15.795281   26955 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1101 00:08:15.795294   26955 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1101 00:08:15.795309   26955 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1101 00:08:15.795317   26955 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1101 00:08:15.795329   26955 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1101 00:08:15.795342   26955 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1101 00:08:15.795353   26955 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1101 00:08:15.795368   26955 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1101 00:08:15.795379   26955 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1101 00:08:15.795392   26955 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1101 00:08:15.795403   26955 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1101 00:08:15.795415   26955 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1101 00:08:15.795426   26955 command_runner.go:130] > # default_runtime = "runc"
	I1101 00:08:15.795436   26955 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1101 00:08:15.795457   26955 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1101 00:08:15.795478   26955 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1101 00:08:15.795490   26955 command_runner.go:130] > # creation as a file is not desired either.
	I1101 00:08:15.795502   26955 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1101 00:08:15.795513   26955 command_runner.go:130] > # the hostname is being managed dynamically.
	I1101 00:08:15.795525   26955 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1101 00:08:15.795535   26955 command_runner.go:130] > # ]
	I1101 00:08:15.795549   26955 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1101 00:08:15.795563   26955 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1101 00:08:15.795577   26955 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1101 00:08:15.795590   26955 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1101 00:08:15.795596   26955 command_runner.go:130] > #
	I1101 00:08:15.795603   26955 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1101 00:08:15.795615   26955 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1101 00:08:15.795627   26955 command_runner.go:130] > #  runtime_type = "oci"
	I1101 00:08:15.795635   26955 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1101 00:08:15.795643   26955 command_runner.go:130] > #  privileged_without_host_devices = false
	I1101 00:08:15.795654   26955 command_runner.go:130] > #  allowed_annotations = []
	I1101 00:08:15.795664   26955 command_runner.go:130] > # Where:
	I1101 00:08:15.795680   26955 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1101 00:08:15.795694   26955 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1101 00:08:15.795705   26955 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1101 00:08:15.795716   26955 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1101 00:08:15.795726   26955 command_runner.go:130] > #   in $PATH.
	I1101 00:08:15.795741   26955 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1101 00:08:15.795753   26955 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1101 00:08:15.795767   26955 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1101 00:08:15.795777   26955 command_runner.go:130] > #   state.
	I1101 00:08:15.795792   26955 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1101 00:08:15.795804   26955 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1101 00:08:15.795813   26955 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1101 00:08:15.795825   26955 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1101 00:08:15.795840   26955 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1101 00:08:15.795860   26955 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1101 00:08:15.795872   26955 command_runner.go:130] > #   The currently recognized values are:
	I1101 00:08:15.795886   26955 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1101 00:08:15.795901   26955 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1101 00:08:15.795912   26955 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1101 00:08:15.795926   26955 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1101 00:08:15.795952   26955 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1101 00:08:15.795969   26955 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1101 00:08:15.795983   26955 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1101 00:08:15.795998   26955 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1101 00:08:15.796009   26955 command_runner.go:130] > #   should be moved to the container's cgroup
	I1101 00:08:15.796019   26955 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1101 00:08:15.796026   26955 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1101 00:08:15.796038   26955 command_runner.go:130] > runtime_type = "oci"
	I1101 00:08:15.796050   26955 command_runner.go:130] > runtime_root = "/run/runc"
	I1101 00:08:15.796058   26955 command_runner.go:130] > runtime_config_path = ""
	I1101 00:08:15.796068   26955 command_runner.go:130] > monitor_path = ""
	I1101 00:08:15.796078   26955 command_runner.go:130] > monitor_cgroup = ""
	I1101 00:08:15.796088   26955 command_runner.go:130] > monitor_exec_cgroup = ""
	I1101 00:08:15.796102   26955 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1101 00:08:15.796111   26955 command_runner.go:130] > # running containers
	I1101 00:08:15.796119   26955 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1101 00:08:15.796135   26955 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1101 00:08:15.796199   26955 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1101 00:08:15.796211   26955 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1101 00:08:15.796221   26955 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1101 00:08:15.796233   26955 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1101 00:08:15.796244   26955 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1101 00:08:15.796256   26955 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1101 00:08:15.796267   26955 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1101 00:08:15.796278   26955 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1101 00:08:15.796293   26955 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1101 00:08:15.796305   26955 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1101 00:08:15.796314   26955 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1101 00:08:15.796328   26955 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1101 00:08:15.796344   26955 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1101 00:08:15.796357   26955 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1101 00:08:15.796375   26955 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1101 00:08:15.796391   26955 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1101 00:08:15.796403   26955 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1101 00:08:15.796417   26955 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1101 00:08:15.796427   26955 command_runner.go:130] > # Example:
	I1101 00:08:15.796436   26955 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1101 00:08:15.796454   26955 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1101 00:08:15.796466   26955 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1101 00:08:15.796478   26955 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1101 00:08:15.796487   26955 command_runner.go:130] > # cpuset = 0
	I1101 00:08:15.796494   26955 command_runner.go:130] > # cpushares = "0-1"
	I1101 00:08:15.796498   26955 command_runner.go:130] > # Where:
	I1101 00:08:15.796510   26955 command_runner.go:130] > # The workload name is workload-type.
	I1101 00:08:15.796525   26955 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1101 00:08:15.796542   26955 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1101 00:08:15.796555   26955 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1101 00:08:15.796571   26955 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1101 00:08:15.796584   26955 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1101 00:08:15.796590   26955 command_runner.go:130] > # 
	I1101 00:08:15.796599   26955 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1101 00:08:15.796608   26955 command_runner.go:130] > #
	I1101 00:08:15.796626   26955 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1101 00:08:15.796641   26955 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1101 00:08:15.796656   26955 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1101 00:08:15.796670   26955 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1101 00:08:15.796682   26955 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1101 00:08:15.796689   26955 command_runner.go:130] > [crio.image]
	I1101 00:08:15.796696   26955 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1101 00:08:15.796709   26955 command_runner.go:130] > # default_transport = "docker://"
	I1101 00:08:15.796723   26955 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1101 00:08:15.796738   26955 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1101 00:08:15.796748   26955 command_runner.go:130] > # global_auth_file = ""
	I1101 00:08:15.796760   26955 command_runner.go:130] > # The image used to instantiate infra containers.
	I1101 00:08:15.796772   26955 command_runner.go:130] > # This option supports live configuration reload.
	I1101 00:08:15.796783   26955 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1101 00:08:15.796790   26955 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1101 00:08:15.796798   26955 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1101 00:08:15.796806   26955 command_runner.go:130] > # This option supports live configuration reload.
	I1101 00:08:15.796815   26955 command_runner.go:130] > # pause_image_auth_file = ""
	I1101 00:08:15.796830   26955 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1101 00:08:15.796841   26955 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1101 00:08:15.796851   26955 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1101 00:08:15.796861   26955 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1101 00:08:15.796869   26955 command_runner.go:130] > # pause_command = "/pause"
	I1101 00:08:15.796875   26955 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1101 00:08:15.796882   26955 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1101 00:08:15.796895   26955 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1101 00:08:15.796906   26955 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1101 00:08:15.796915   26955 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1101 00:08:15.796922   26955 command_runner.go:130] > # signature_policy = ""
	I1101 00:08:15.796932   26955 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1101 00:08:15.796943   26955 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1101 00:08:15.796949   26955 command_runner.go:130] > # changing them here.
	I1101 00:08:15.796956   26955 command_runner.go:130] > # insecure_registries = [
	I1101 00:08:15.796963   26955 command_runner.go:130] > # ]
	I1101 00:08:15.796975   26955 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1101 00:08:15.796987   26955 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1101 00:08:15.797002   26955 command_runner.go:130] > # image_volumes = "mkdir"
	I1101 00:08:15.797014   26955 command_runner.go:130] > # Temporary directory to use for storing big files
	I1101 00:08:15.797025   26955 command_runner.go:130] > # big_files_temporary_dir = ""
	I1101 00:08:15.797039   26955 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1101 00:08:15.797049   26955 command_runner.go:130] > # CNI plugins.
	I1101 00:08:15.797055   26955 command_runner.go:130] > [crio.network]
	I1101 00:08:15.797065   26955 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1101 00:08:15.797078   26955 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1101 00:08:15.797088   26955 command_runner.go:130] > # cni_default_network = ""
	I1101 00:08:15.797101   26955 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1101 00:08:15.797113   26955 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1101 00:08:15.797126   26955 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1101 00:08:15.797135   26955 command_runner.go:130] > # plugin_dirs = [
	I1101 00:08:15.797141   26955 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1101 00:08:15.797147   26955 command_runner.go:130] > # ]
	I1101 00:08:15.797160   26955 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1101 00:08:15.797171   26955 command_runner.go:130] > [crio.metrics]
	I1101 00:08:15.797180   26955 command_runner.go:130] > # Globally enable or disable metrics support.
	I1101 00:08:15.797193   26955 command_runner.go:130] > enable_metrics = true
	I1101 00:08:15.797205   26955 command_runner.go:130] > # Specify enabled metrics collectors.
	I1101 00:08:15.797216   26955 command_runner.go:130] > # Per default all metrics are enabled.
	I1101 00:08:15.797229   26955 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1101 00:08:15.797242   26955 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1101 00:08:15.797253   26955 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1101 00:08:15.797264   26955 command_runner.go:130] > # metrics_collectors = [
	I1101 00:08:15.797274   26955 command_runner.go:130] > # 	"operations",
	I1101 00:08:15.797283   26955 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1101 00:08:15.797295   26955 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1101 00:08:15.797306   26955 command_runner.go:130] > # 	"operations_errors",
	I1101 00:08:15.797316   26955 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1101 00:08:15.797326   26955 command_runner.go:130] > # 	"image_pulls_by_name",
	I1101 00:08:15.797337   26955 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1101 00:08:15.797344   26955 command_runner.go:130] > # 	"image_pulls_failures",
	I1101 00:08:15.797349   26955 command_runner.go:130] > # 	"image_pulls_successes",
	I1101 00:08:15.797359   26955 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1101 00:08:15.797370   26955 command_runner.go:130] > # 	"image_layer_reuse",
	I1101 00:08:15.797382   26955 command_runner.go:130] > # 	"containers_oom_total",
	I1101 00:08:15.797393   26955 command_runner.go:130] > # 	"containers_oom",
	I1101 00:08:15.797403   26955 command_runner.go:130] > # 	"processes_defunct",
	I1101 00:08:15.797413   26955 command_runner.go:130] > # 	"operations_total",
	I1101 00:08:15.797424   26955 command_runner.go:130] > # 	"operations_latency_seconds",
	I1101 00:08:15.797435   26955 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1101 00:08:15.797442   26955 command_runner.go:130] > # 	"operations_errors_total",
	I1101 00:08:15.797452   26955 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1101 00:08:15.797464   26955 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1101 00:08:15.797475   26955 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1101 00:08:15.797490   26955 command_runner.go:130] > # 	"image_pulls_success_total",
	I1101 00:08:15.797501   26955 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1101 00:08:15.797511   26955 command_runner.go:130] > # 	"containers_oom_count_total",
	I1101 00:08:15.797520   26955 command_runner.go:130] > # ]
	I1101 00:08:15.797531   26955 command_runner.go:130] > # The port on which the metrics server will listen.
	I1101 00:08:15.797537   26955 command_runner.go:130] > # metrics_port = 9090
	I1101 00:08:15.797546   26955 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1101 00:08:15.797556   26955 command_runner.go:130] > # metrics_socket = ""
	I1101 00:08:15.797572   26955 command_runner.go:130] > # The certificate for the secure metrics server.
	I1101 00:08:15.797586   26955 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1101 00:08:15.797600   26955 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1101 00:08:15.797611   26955 command_runner.go:130] > # certificate on any modification event.
	I1101 00:08:15.797619   26955 command_runner.go:130] > # metrics_cert = ""
	I1101 00:08:15.797625   26955 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1101 00:08:15.797638   26955 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1101 00:08:15.797649   26955 command_runner.go:130] > # metrics_key = ""
	I1101 00:08:15.797659   26955 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1101 00:08:15.797670   26955 command_runner.go:130] > [crio.tracing]
	I1101 00:08:15.797682   26955 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1101 00:08:15.797693   26955 command_runner.go:130] > # enable_tracing = false
	I1101 00:08:15.797705   26955 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1101 00:08:15.797715   26955 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1101 00:08:15.797723   26955 command_runner.go:130] > # Number of samples to collect per million spans.
	I1101 00:08:15.797733   26955 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1101 00:08:15.797747   26955 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1101 00:08:15.797758   26955 command_runner.go:130] > [crio.stats]
	I1101 00:08:15.797775   26955 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1101 00:08:15.797788   26955 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1101 00:08:15.797799   26955 command_runner.go:130] > # stats_collection_period = 0
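	(That is the end of the generated crio.conf that `crio config` echoes back. None of it is edited by the test, but if one of these values ever had to be changed by hand, CRI-O also merges drop-in files from /etc/crio/crio.conf.d/ in lexical order; a minimal sketch with a hypothetical file name:

	  # Hypothetical drop-in; only the keys listed here override the generated config.
	  printf '[crio.runtime]\npids_limit = 2048\n' | sudo tee /etc/crio/crio.conf.d/10-overrides.conf
	  # pids_limit is not marked as live-reloadable above, so restart rather than SIGHUP.
	  sudo systemctl restart crio
	)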
	I1101 00:08:15.797908   26955 cni.go:84] Creating CNI manager for ""
	I1101 00:08:15.797922   26955 cni.go:136] 1 nodes found, recommending kindnet
	I1101 00:08:15.797937   26955 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 00:08:15.797961   26955 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.130 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-600483 NodeName:multinode-600483 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.130"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.130 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 00:08:15.798130   26955 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.130
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-600483"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.130
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.130"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
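	(The four documents above — InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration — are what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a rough sanity check outside the test, the same file can be validated and rendered on the node without starting anything, using the kubeadm binary path that appears in this log:

	  # Dry-run validates the config and prints the manifests without touching the node.
	  sudo /var/lib/minikube/binaries/v1.28.3/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
	)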
	I1101 00:08:15.798218   26955 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-600483 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.130
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-600483 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
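	(The [Service] override above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf in the next few lines. Outside the test, the effective unit and the flags actually passed to the kubelet can be inspected with standard systemd tooling; a small sketch:

	  # Show the base unit plus the 10-kubeadm.conf drop-in, then reload if it was edited by hand.
	  systemctl cat kubelet
	  sudo systemctl daemon-reload && sudo systemctl restart kubelet
	)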
	I1101 00:08:15.798288   26955 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 00:08:15.807630   26955 command_runner.go:130] > kubeadm
	I1101 00:08:15.807651   26955 command_runner.go:130] > kubectl
	I1101 00:08:15.807658   26955 command_runner.go:130] > kubelet
	I1101 00:08:15.807770   26955 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 00:08:15.807830   26955 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 00:08:15.816112   26955 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I1101 00:08:15.832763   26955 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 00:08:15.849175   26955 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I1101 00:08:15.865723   26955 ssh_runner.go:195] Run: grep 192.168.39.130	control-plane.minikube.internal$ /etc/hosts
	I1101 00:08:15.869443   26955 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.130	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 00:08:15.880838   26955 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483 for IP: 192.168.39.130
	I1101 00:08:15.880871   26955 certs.go:190] acquiring lock for shared ca certs: {Name:mk6185b3938c4b51e80f57b7c81926c81b632b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:08:15.881020   26955 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key
	I1101 00:08:15.881073   26955 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key
	I1101 00:08:15.881114   26955 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.key
	I1101 00:08:15.881125   26955 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.crt with IP's: []
	I1101 00:08:16.132607   26955 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.crt ...
	I1101 00:08:16.132634   26955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.crt: {Name:mk0d6a63f15e2ebcbc978ac8c09fdef3faf47b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:08:16.132796   26955 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.key ...
	I1101 00:08:16.132807   26955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.key: {Name:mk67958a1fb54660350c86c1a13b960726a827f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:08:16.132881   26955 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/apiserver.key.3e334af8
	I1101 00:08:16.132894   26955 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/apiserver.crt.3e334af8 with IP's: [192.168.39.130 10.96.0.1 127.0.0.1 10.0.0.1]
	I1101 00:08:16.284729   26955 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/apiserver.crt.3e334af8 ...
	I1101 00:08:16.284766   26955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/apiserver.crt.3e334af8: {Name:mkf390bd95fe868c9195aa46e9fe54842734c2d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:08:16.284919   26955 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/apiserver.key.3e334af8 ...
	I1101 00:08:16.284931   26955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/apiserver.key.3e334af8: {Name:mk94e8b1f70dcdbe035c240f42b430cfccfc9206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:08:16.285004   26955 certs.go:337] copying /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/apiserver.crt.3e334af8 -> /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/apiserver.crt
	I1101 00:08:16.285084   26955 certs.go:341] copying /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/apiserver.key.3e334af8 -> /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/apiserver.key
	I1101 00:08:16.285142   26955 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/proxy-client.key
	I1101 00:08:16.285155   26955 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/proxy-client.crt with IP's: []
	I1101 00:08:16.493580   26955 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/proxy-client.crt ...
	I1101 00:08:16.493609   26955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/proxy-client.crt: {Name:mk1064250c0e45adfe86ace8ca75e318808583d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:08:16.493757   26955 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/proxy-client.key ...
	I1101 00:08:16.493767   26955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/proxy-client.key: {Name:mkc01fa104c81478d10b6566227808193b070f7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:08:16.493821   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1101 00:08:16.493837   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1101 00:08:16.493847   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1101 00:08:16.493860   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1101 00:08:16.493872   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1101 00:08:16.493884   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1101 00:08:16.493896   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1101 00:08:16.493909   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1101 00:08:16.493961   26955 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem (1338 bytes)
	W1101 00:08:16.493997   26955 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504_empty.pem, impossibly tiny 0 bytes
	I1101 00:08:16.494010   26955 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 00:08:16.494035   26955 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem (1078 bytes)
	I1101 00:08:16.494056   26955 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem (1123 bytes)
	I1101 00:08:16.494083   26955 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem (1675 bytes)
	I1101 00:08:16.494121   26955 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem (1708 bytes)
	I1101 00:08:16.494144   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:08:16.494157   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem -> /usr/share/ca-certificates/14504.pem
	I1101 00:08:16.494171   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> /usr/share/ca-certificates/145042.pem
	I1101 00:08:16.494730   26955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 00:08:16.523671   26955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 00:08:16.547666   26955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 00:08:16.573586   26955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 00:08:16.599476   26955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 00:08:16.625182   26955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 00:08:16.649466   26955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 00:08:16.672743   26955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 00:08:16.695507   26955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 00:08:16.719279   26955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem --> /usr/share/ca-certificates/14504.pem (1338 bytes)
	I1101 00:08:16.742568   26955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /usr/share/ca-certificates/145042.pem (1708 bytes)
	I1101 00:08:16.764864   26955 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
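The "scp ... -->" lines above push the generated certs and the kubeconfig onto the guest. Below is a minimal, illustrative Go sketch of that kind of file push over SSH — it is not minikube's ssh_runner; the golang.org/x/crypto/ssh usage, the key path, the "docker" user and the 192.168.39.130:22 address are assumptions taken from the surrounding log.

// copy_cert.go: sketch of streaming a local file to a remote path over SSH,
// roughly mirroring the cert-copy steps above. Paths are placeholders.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func copyFile(client *ssh.Client, localPath, remotePath string) error {
	src, err := os.Open(localPath)
	if err != nil {
		return err
	}
	defer src.Close()

	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()

	// Stream the local file into a remote "tee", which writes the target path.
	sess.Stdin = src
	return sess.Run(fmt.Sprintf("sudo tee %s > /dev/null", remotePath))
}

func main() {
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/multinode-600483/id_rsa"))
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.130:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	if err := copyFile(client, "apiserver.crt", "/var/lib/minikube/certs/apiserver.crt"); err != nil {
		panic(err)
	}
}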
	I1101 00:08:16.781104   26955 ssh_runner.go:195] Run: openssl version
	I1101 00:08:16.786510   26955 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1101 00:08:16.786652   26955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 00:08:16.796290   26955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:08:16.800992   26955 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:08:16.801023   26955 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:08:16.801079   26955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:08:16.806236   26955 command_runner.go:130] > b5213941
	I1101 00:08:16.806570   26955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 00:08:16.816108   26955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14504.pem && ln -fs /usr/share/ca-certificates/14504.pem /etc/ssl/certs/14504.pem"
	I1101 00:08:16.825589   26955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14504.pem
	I1101 00:08:16.830074   26955 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 00:08:16.830110   26955 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 00:08:16.830186   26955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14504.pem
	I1101 00:08:16.835856   26955 command_runner.go:130] > 51391683
	I1101 00:08:16.835963   26955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14504.pem /etc/ssl/certs/51391683.0"
	I1101 00:08:16.845691   26955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145042.pem && ln -fs /usr/share/ca-certificates/145042.pem /etc/ssl/certs/145042.pem"
	I1101 00:08:16.855191   26955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145042.pem
	I1101 00:08:16.859705   26955 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 00:08:16.859734   26955 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 00:08:16.859786   26955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145042.pem
	I1101 00:08:16.864881   26955 command_runner.go:130] > 3ec20f2e
	I1101 00:08:16.865143   26955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145042.pem /etc/ssl/certs/3ec20f2e.0"
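The openssl/ln sequence above installs each CA under the hash-named symlink OpenSSL uses for trust lookups (e.g. b5213941.0). A small sketch of the same idea, assuming an openssl binary on PATH and write access to /etc/ssl/certs:

// ca_symlink.go: ask openssl for a PEM cert's subject hash and link
// /etc/ssl/certs/<hash>.0 at it, mirroring the "ln -fs" commands above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func linkCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // replace an existing link, as "ln -fs" would
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}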
	I1101 00:08:16.875314   26955 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 00:08:16.879194   26955 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1101 00:08:16.879621   26955 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1101 00:08:16.879677   26955 kubeadm.go:404] StartCluster: {Name:multinode-600483 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.3 ClusterName:multinode-600483 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.130 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:08:16.879771   26955 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 00:08:16.879851   26955 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 00:08:16.922863   26955 cri.go:89] found id: ""
	I1101 00:08:16.922928   26955 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 00:08:16.931431   26955 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1101 00:08:16.931454   26955 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1101 00:08:16.931460   26955 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1101 00:08:16.931529   26955 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 00:08:16.939688   26955 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 00:08:16.948338   26955 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1101 00:08:16.948374   26955 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1101 00:08:16.948382   26955 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1101 00:08:16.948409   26955 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 00:08:16.948722   26955 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 00:08:16.948763   26955 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1101 00:08:17.342832   26955 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 00:08:17.342854   26955 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 00:08:30.383842   26955 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1101 00:08:30.383869   26955 command_runner.go:130] > [init] Using Kubernetes version: v1.28.3
	I1101 00:08:30.383925   26955 kubeadm.go:322] [preflight] Running pre-flight checks
	I1101 00:08:30.383949   26955 command_runner.go:130] > [preflight] Running pre-flight checks
	I1101 00:08:30.384072   26955 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 00:08:30.384102   26955 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 00:08:30.384247   26955 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 00:08:30.384260   26955 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 00:08:30.384374   26955 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 00:08:30.384389   26955 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 00:08:30.384501   26955 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 00:08:30.386233   26955 out.go:204]   - Generating certificates and keys ...
	I1101 00:08:30.384569   26955 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 00:08:30.386313   26955 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1101 00:08:30.386335   26955 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1101 00:08:30.386420   26955 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1101 00:08:30.386444   26955 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1101 00:08:30.386562   26955 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 00:08:30.386577   26955 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 00:08:30.386646   26955 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1101 00:08:30.386658   26955 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1101 00:08:30.386747   26955 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1101 00:08:30.386766   26955 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1101 00:08:30.386827   26955 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1101 00:08:30.386837   26955 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1101 00:08:30.386901   26955 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1101 00:08:30.386917   26955 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1101 00:08:30.387081   26955 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-600483] and IPs [192.168.39.130 127.0.0.1 ::1]
	I1101 00:08:30.387103   26955 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-600483] and IPs [192.168.39.130 127.0.0.1 ::1]
	I1101 00:08:30.387169   26955 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1101 00:08:30.387184   26955 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1101 00:08:30.387335   26955 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-600483] and IPs [192.168.39.130 127.0.0.1 ::1]
	I1101 00:08:30.387349   26955 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-600483] and IPs [192.168.39.130 127.0.0.1 ::1]
	I1101 00:08:30.387437   26955 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 00:08:30.387447   26955 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 00:08:30.387531   26955 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 00:08:30.387539   26955 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 00:08:30.387608   26955 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1101 00:08:30.387620   26955 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1101 00:08:30.387687   26955 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 00:08:30.387697   26955 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 00:08:30.387761   26955 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 00:08:30.387782   26955 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 00:08:30.387955   26955 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 00:08:30.387978   26955 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 00:08:30.388065   26955 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 00:08:30.388073   26955 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 00:08:30.388127   26955 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 00:08:30.388138   26955 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 00:08:30.388230   26955 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 00:08:30.388241   26955 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 00:08:30.388341   26955 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 00:08:30.388355   26955 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 00:08:30.390045   26955 out.go:204]   - Booting up control plane ...
	I1101 00:08:30.390159   26955 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 00:08:30.390179   26955 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 00:08:30.390274   26955 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 00:08:30.390295   26955 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 00:08:30.390386   26955 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 00:08:30.390406   26955 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 00:08:30.390553   26955 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 00:08:30.390593   26955 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 00:08:30.390726   26955 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 00:08:30.390738   26955 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 00:08:30.390782   26955 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1101 00:08:30.390791   26955 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1101 00:08:30.390990   26955 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 00:08:30.391004   26955 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 00:08:30.391114   26955 command_runner.go:130] > [apiclient] All control plane components are healthy after 9.006945 seconds
	I1101 00:08:30.391129   26955 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.006945 seconds
	I1101 00:08:30.391274   26955 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 00:08:30.391285   26955 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 00:08:30.391448   26955 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 00:08:30.391457   26955 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 00:08:30.391543   26955 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1101 00:08:30.391560   26955 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 00:08:30.391789   26955 command_runner.go:130] > [mark-control-plane] Marking the node multinode-600483 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 00:08:30.391804   26955 kubeadm.go:322] [mark-control-plane] Marking the node multinode-600483 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 00:08:30.391890   26955 command_runner.go:130] > [bootstrap-token] Using token: uek2jk.g70k24um933yl1ag
	I1101 00:08:30.391903   26955 kubeadm.go:322] [bootstrap-token] Using token: uek2jk.g70k24um933yl1ag
	I1101 00:08:30.393479   26955 out.go:204]   - Configuring RBAC rules ...
	I1101 00:08:30.393605   26955 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 00:08:30.393616   26955 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 00:08:30.393694   26955 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 00:08:30.393716   26955 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 00:08:30.393908   26955 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 00:08:30.393923   26955 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 00:08:30.394038   26955 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 00:08:30.394048   26955 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 00:08:30.394137   26955 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 00:08:30.394152   26955 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 00:08:30.394219   26955 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 00:08:30.394237   26955 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 00:08:30.394395   26955 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 00:08:30.394404   26955 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 00:08:30.394464   26955 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1101 00:08:30.394482   26955 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1101 00:08:30.394556   26955 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1101 00:08:30.394566   26955 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1101 00:08:30.394573   26955 kubeadm.go:322] 
	I1101 00:08:30.394626   26955 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1101 00:08:30.394633   26955 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1101 00:08:30.394636   26955 kubeadm.go:322] 
	I1101 00:08:30.394737   26955 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1101 00:08:30.394750   26955 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1101 00:08:30.394756   26955 kubeadm.go:322] 
	I1101 00:08:30.394788   26955 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1101 00:08:30.394808   26955 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1101 00:08:30.394886   26955 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 00:08:30.394889   26955 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 00:08:30.394953   26955 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 00:08:30.394962   26955 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 00:08:30.394967   26955 kubeadm.go:322] 
	I1101 00:08:30.395048   26955 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1101 00:08:30.395057   26955 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1101 00:08:30.395067   26955 kubeadm.go:322] 
	I1101 00:08:30.395160   26955 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 00:08:30.395170   26955 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 00:08:30.395176   26955 kubeadm.go:322] 
	I1101 00:08:30.395243   26955 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1101 00:08:30.395251   26955 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1101 00:08:30.395346   26955 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 00:08:30.395358   26955 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 00:08:30.395415   26955 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 00:08:30.395421   26955 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 00:08:30.395424   26955 kubeadm.go:322] 
	I1101 00:08:30.395491   26955 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1101 00:08:30.395497   26955 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 00:08:30.395558   26955 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1101 00:08:30.395572   26955 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1101 00:08:30.395592   26955 kubeadm.go:322] 
	I1101 00:08:30.395699   26955 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token uek2jk.g70k24um933yl1ag \
	I1101 00:08:30.395711   26955 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token uek2jk.g70k24um933yl1ag \
	I1101 00:08:30.395837   26955 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 \
	I1101 00:08:30.395857   26955 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 \
	I1101 00:08:30.395900   26955 command_runner.go:130] > 	--control-plane 
	I1101 00:08:30.395910   26955 kubeadm.go:322] 	--control-plane 
	I1101 00:08:30.395918   26955 kubeadm.go:322] 
	I1101 00:08:30.396028   26955 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1101 00:08:30.396032   26955 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1101 00:08:30.396045   26955 kubeadm.go:322] 
	I1101 00:08:30.396153   26955 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token uek2jk.g70k24um933yl1ag \
	I1101 00:08:30.396164   26955 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token uek2jk.g70k24um933yl1ag \
	I1101 00:08:30.396271   26955 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 
	I1101 00:08:30.396290   26955 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 
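The "Start:" invocation earlier runs kubeadm init under bash with the minikube binaries directory prepended to PATH. A hedged sketch of that pattern is below; the binary and config paths come from the log, but the preflight-error list is abbreviated for brevity and this is not minikube's bootstrapper code.

// kubeadm_init.go: illustrative shell-out for "kubeadm init" with a custom PATH.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := `sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" ` +
		`kubeadm init --config /var/tmp/minikube/kubeadm.yaml ` +
		`--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem` // shortened list

	c := exec.Command("/bin/bash", "-c", cmd)
	c.Stdout = os.Stdout
	c.Stderr = os.Stderr
	if err := c.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "kubeadm init failed:", err)
		os.Exit(1)
	}
}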
	I1101 00:08:30.396297   26955 cni.go:84] Creating CNI manager for ""
	I1101 00:08:30.396302   26955 cni.go:136] 1 nodes found, recommending kindnet
	I1101 00:08:30.397980   26955 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1101 00:08:30.399161   26955 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 00:08:30.416242   26955 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1101 00:08:30.416268   26955 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1101 00:08:30.416277   26955 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1101 00:08:30.416288   26955 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1101 00:08:30.416297   26955 command_runner.go:130] > Access: 2023-11-01 00:07:57.714642713 +0000
	I1101 00:08:30.416305   26955 command_runner.go:130] > Modify: 2023-10-31 23:04:20.000000000 +0000
	I1101 00:08:30.416314   26955 command_runner.go:130] > Change: 2023-11-01 00:07:55.940642713 +0000
	I1101 00:08:30.416320   26955 command_runner.go:130] >  Birth: -
	I1101 00:08:30.417460   26955 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1101 00:08:30.417495   26955 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1101 00:08:30.471277   26955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 00:08:31.431998   26955 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1101 00:08:31.432020   26955 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1101 00:08:31.432026   26955 command_runner.go:130] > serviceaccount/kindnet created
	I1101 00:08:31.432031   26955 command_runner.go:130] > daemonset.apps/kindnet created
	I1101 00:08:31.432060   26955 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 00:08:31.432175   26955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:08:31.432209   26955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9 minikube.k8s.io/name=multinode-600483 minikube.k8s.io/updated_at=2023_11_01T00_08_31_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:08:31.462972   26955 command_runner.go:130] > -16
	I1101 00:08:31.463061   26955 ops.go:34] apiserver oom_adj: -16
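The oom_adj probe above simply reads the apiserver's score through a shell pipeline. A tiny sketch of the same check, assuming pgrep is available and a kube-apiserver process is running locally:

// oom_check.go: read the apiserver's oom_adj the same way the log does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("/bin/bash", "-c", "cat /proc/$(pgrep kube-apiserver)/oom_adj").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(out))) // expect a negative value such as -16
}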
	I1101 00:08:31.645441   26955 command_runner.go:130] > node/multinode-600483 labeled
	I1101 00:08:31.645523   26955 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1101 00:08:31.645624   26955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:08:31.728499   26955 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 00:08:31.730228   26955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:08:31.813688   26955 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 00:08:32.315888   26955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:08:32.403637   26955 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 00:08:32.816234   26955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:08:32.896900   26955 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 00:08:33.316063   26955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:08:33.395109   26955 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 00:08:33.816260   26955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:08:33.903000   26955 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 00:08:34.315592   26955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:08:34.410887   26955 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 00:08:34.815412   26955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:08:34.895718   26955 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 00:08:35.315814   26955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:08:35.395635   26955 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 00:08:35.816036   26955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:08:35.900126   26955 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 00:08:36.315718   26955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:08:36.407073   26955 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 00:08:36.815597   26955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:08:36.903394   26955 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 00:08:37.315834   26955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:08:37.399358   26955 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 00:08:37.815989   26955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:08:37.894541   26955 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 00:08:38.316177   26955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:08:38.403618   26955 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 00:08:38.816021   26955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:08:38.943273   26955 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 00:08:39.315837   26955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:08:39.405273   26955 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 00:08:39.815328   26955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:08:39.905156   26955 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 00:08:40.315728   26955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:08:40.421125   26955 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 00:08:40.815613   26955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:08:40.913091   26955 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 00:08:41.315419   26955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:08:41.410383   26955 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 00:08:41.816083   26955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:08:41.920891   26955 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 00:08:42.315398   26955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:08:42.473987   26955 command_runner.go:130] > NAME      SECRETS   AGE
	I1101 00:08:42.474006   26955 command_runner.go:130] > default   0         0s
	I1101 00:08:42.475419   26955 kubeadm.go:1081] duration metric: took 11.043313823s to wait for elevateKubeSystemPrivileges.
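The repeated "serviceaccounts \"default\" not found" lines above are a retry loop: the tool polls until the token controller creates the default ServiceAccount. A minimal client-go equivalent of that wait, assuming a kubeconfig at /var/lib/minikube/kubeconfig and the ~500ms interval the timestamps suggest:

// wait_default_sa.go: poll until the "default" ServiceAccount exists.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(2 * time.Minute)
	for {
		_, err := client.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		if err == nil {
			fmt.Println("default ServiceAccount is ready")
			return
		}
		if time.Now().After(deadline) {
			panic(fmt.Sprintf("timed out waiting for default ServiceAccount: %v", err))
		}
		time.Sleep(500 * time.Millisecond)
	}
}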
	I1101 00:08:42.475453   26955 kubeadm.go:406] StartCluster complete in 25.595782385s
	I1101 00:08:42.475472   26955 settings.go:142] acquiring lock: {Name:mk7f269e64dfd8d176737f993e01f6e6badafbc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:08:42.475557   26955 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 00:08:42.476398   26955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/kubeconfig: {Name:mk08da65b6c71084e1cfafb19800038e8c8303e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:08:42.476656   26955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 00:08:42.476753   26955 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1101 00:08:42.476817   26955 addons.go:69] Setting storage-provisioner=true in profile "multinode-600483"
	I1101 00:08:42.476819   26955 addons.go:69] Setting default-storageclass=true in profile "multinode-600483"
	I1101 00:08:42.476836   26955 addons.go:231] Setting addon storage-provisioner=true in "multinode-600483"
	I1101 00:08:42.476843   26955 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-600483"
	I1101 00:08:42.476881   26955 config.go:182] Loaded profile config "multinode-600483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:08:42.476887   26955 host.go:66] Checking if "multinode-600483" exists ...
	I1101 00:08:42.477034   26955 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 00:08:42.477336   26955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1101 00:08:42.477348   26955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1101 00:08:42.477372   26955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:08:42.477372   26955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:08:42.477374   26955 kapi.go:59] client config for multinode-600483: &rest.Config{Host:"https://192.168.39.130:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.key", CAFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
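The kapi.go line above reports a rest.Config built from the profile's client cert/key and the minikube CA. A self-contained sketch of constructing such a client is below; the paths and server address are copied from the log, and the node-listing call at the end is only an illustrative smoke test, not something minikube performs here.

// client_from_certs.go: build a client-go clientset from cert-file credentials.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://192.168.39.130:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.key",
			CAFile:   "/home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt",
		},
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}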
	I1101 00:08:42.478206   26955 cert_rotation.go:137] Starting client certificate rotation controller
	I1101 00:08:42.478466   26955 round_trippers.go:463] GET https://192.168.39.130:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1101 00:08:42.478480   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:42.478512   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:42.478527   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:42.497523   26955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35303
	I1101 00:08:42.498003   26955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45359
	I1101 00:08:42.498077   26955 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:08:42.498412   26955 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:08:42.498647   26955 main.go:141] libmachine: Using API Version  1
	I1101 00:08:42.498675   26955 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:08:42.498898   26955 main.go:141] libmachine: Using API Version  1
	I1101 00:08:42.498927   26955 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:08:42.499029   26955 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:08:42.499233   26955 main.go:141] libmachine: (multinode-600483) Calling .GetState
	I1101 00:08:42.499272   26955 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:08:42.499743   26955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1101 00:08:42.499780   26955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:08:42.501495   26955 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 00:08:42.501703   26955 kapi.go:59] client config for multinode-600483: &rest.Config{Host:"https://192.168.39.130:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.key", CAFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 00:08:42.501916   26955 addons.go:231] Setting addon default-storageclass=true in "multinode-600483"
	I1101 00:08:42.501948   26955 host.go:66] Checking if "multinode-600483" exists ...
	I1101 00:08:42.502244   26955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1101 00:08:42.502278   26955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:08:42.516343   26955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39887
	I1101 00:08:42.516867   26955 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:08:42.517338   26955 main.go:141] libmachine: Using API Version  1
	I1101 00:08:42.517363   26955 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:08:42.517392   26955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44137
	I1101 00:08:42.517729   26955 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:08:42.517794   26955 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:08:42.517822   26955 round_trippers.go:574] Response Status: 200 OK in 39 milliseconds
	I1101 00:08:42.517840   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:42.517854   26955 round_trippers.go:580]     Content-Length: 291
	I1101 00:08:42.517869   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:42 GMT
	I1101 00:08:42.517878   26955 round_trippers.go:580]     Audit-Id: 4c9def47-47f2-4601-ab03-403ebb4a8aff
	I1101 00:08:42.517886   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:42.517900   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:42.517911   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:42.517925   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:42.517910   26955 main.go:141] libmachine: (multinode-600483) Calling .GetState
	I1101 00:08:42.518298   26955 main.go:141] libmachine: Using API Version  1
	I1101 00:08:42.518316   26955 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:08:42.518475   26955 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"21004493-8bb6-43e9-8ba2-65d98d570b24","resourceVersion":"314","creationTimestamp":"2023-11-01T00:08:30Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1101 00:08:42.518651   26955 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:08:42.519056   26955 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"21004493-8bb6-43e9-8ba2-65d98d570b24","resourceVersion":"314","creationTimestamp":"2023-11-01T00:08:30Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1101 00:08:42.519133   26955 round_trippers.go:463] PUT https://192.168.39.130:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1101 00:08:42.519145   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:42.519156   26955 round_trippers.go:473]     Content-Type: application/json
	I1101 00:08:42.519171   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:42.519181   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:42.519301   26955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1101 00:08:42.519347   26955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:08:42.519915   26955 main.go:141] libmachine: (multinode-600483) Calling .DriverName
	I1101 00:08:42.522091   26955 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 00:08:42.523573   26955 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 00:08:42.523596   26955 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 00:08:42.523618   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHHostname
	I1101 00:08:42.526980   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:42.527404   26955 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:08:42.527437   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:42.527771   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHPort
	I1101 00:08:42.528020   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:08:42.528206   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHUsername
	I1101 00:08:42.528369   26955 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483/id_rsa Username:docker}
	I1101 00:08:42.535350   26955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34215
	I1101 00:08:42.535766   26955 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:08:42.536265   26955 main.go:141] libmachine: Using API Version  1
	I1101 00:08:42.536287   26955 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:08:42.536623   26955 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:08:42.536837   26955 main.go:141] libmachine: (multinode-600483) Calling .GetState
	I1101 00:08:42.538503   26955 main.go:141] libmachine: (multinode-600483) Calling .DriverName
	I1101 00:08:42.538761   26955 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 00:08:42.538778   26955 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 00:08:42.538797   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHHostname
	I1101 00:08:42.541470   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:42.541731   26955 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:08:42.541757   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:08:42.541883   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHPort
	I1101 00:08:42.542073   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:08:42.542189   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHUsername
	I1101 00:08:42.542310   26955 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483/id_rsa Username:docker}
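The sshutil.go:53 lines above show the test opening SSH sessions to the node at 192.168.39.130:22 as user docker with the profile's id_rsa key so the two addon manifests can be staged under /etc/kubernetes/addons. Purely as an illustration of that step (a sketch using golang.org/x/crypto/ssh, not minikube's own ssh_runner; only the address, user and key path come from the log, the command run is an assumption):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path, user and address as reported by sshutil.go:53 above.
        key, err := os.ReadFile("/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", "192.168.39.130:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()

        // Hypothetical check: list whatever manifests have been staged on the node.
        out, err := session.CombinedOutput("ls /etc/kubernetes/addons")
        fmt.Printf("%s err=%v\n", out, err)
    }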
	I1101 00:08:42.554676   26955 round_trippers.go:574] Response Status: 200 OK in 35 milliseconds
	I1101 00:08:42.554698   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:42.554708   26955 round_trippers.go:580]     Content-Length: 291
	I1101 00:08:42.554717   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:42 GMT
	I1101 00:08:42.554725   26955 round_trippers.go:580]     Audit-Id: c43894c4-0da9-496e-93ac-064775b7fcdb
	I1101 00:08:42.554733   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:42.554741   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:42.554757   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:42.554767   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:42.555113   26955 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"21004493-8bb6-43e9-8ba2-65d98d570b24","resourceVersion":"334","creationTimestamp":"2023-11-01T00:08:30Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1101 00:08:42.555271   26955 round_trippers.go:463] GET https://192.168.39.130:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1101 00:08:42.555280   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:42.555287   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:42.555296   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:42.598712   26955 round_trippers.go:574] Response Status: 200 OK in 43 milliseconds
	I1101 00:08:42.598733   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:42.598740   26955 round_trippers.go:580]     Audit-Id: 2ba19faf-fa83-41ee-a1cc-5398ea69ca0a
	I1101 00:08:42.598745   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:42.598750   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:42.598755   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:42.598760   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:42.598765   26955 round_trippers.go:580]     Content-Length: 291
	I1101 00:08:42.598770   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:42 GMT
	I1101 00:08:42.598838   26955 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"21004493-8bb6-43e9-8ba2-65d98d570b24","resourceVersion":"334","creationTimestamp":"2023-11-01T00:08:30Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1101 00:08:42.598966   26955 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-600483" context rescaled to 1 replicas
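The requests above against /apis/apps/v1/namespaces/kube-system/deployments/coredns/scale end with kapi.go:248 reporting the coredns deployment rescaled to 1 replica; minikube drives this through the Deployment's Scale subresource. A rough client-go equivalent of that step (a sketch, not the code that produced this log; the kubeconfig path is the one loader.go reports below):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17486-7305/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        ctx := context.TODO()
        // GET .../deployments/coredns/scale
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Write the same object back with spec.replicas forced to 1.
        scale.Spec.Replicas = 1
        if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("coredns rescaled to 1 replica")
    }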
	I1101 00:08:42.599002   26955 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.130 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 00:08:42.600909   26955 out.go:177] * Verifying Kubernetes components...
	I1101 00:08:42.602338   26955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 00:08:42.683180   26955 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 00:08:42.684710   26955 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 00:08:42.782903   26955 command_runner.go:130] > apiVersion: v1
	I1101 00:08:42.782923   26955 command_runner.go:130] > data:
	I1101 00:08:42.782931   26955 command_runner.go:130] >   Corefile: |
	I1101 00:08:42.782935   26955 command_runner.go:130] >     .:53 {
	I1101 00:08:42.782939   26955 command_runner.go:130] >         errors
	I1101 00:08:42.782944   26955 command_runner.go:130] >         health {
	I1101 00:08:42.782952   26955 command_runner.go:130] >            lameduck 5s
	I1101 00:08:42.782956   26955 command_runner.go:130] >         }
	I1101 00:08:42.782965   26955 command_runner.go:130] >         ready
	I1101 00:08:42.782975   26955 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1101 00:08:42.782984   26955 command_runner.go:130] >            pods insecure
	I1101 00:08:42.782999   26955 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1101 00:08:42.783013   26955 command_runner.go:130] >            ttl 30
	I1101 00:08:42.783019   26955 command_runner.go:130] >         }
	I1101 00:08:42.783026   26955 command_runner.go:130] >         prometheus :9153
	I1101 00:08:42.783031   26955 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1101 00:08:42.783038   26955 command_runner.go:130] >            max_concurrent 1000
	I1101 00:08:42.783042   26955 command_runner.go:130] >         }
	I1101 00:08:42.783049   26955 command_runner.go:130] >         cache 30
	I1101 00:08:42.783054   26955 command_runner.go:130] >         loop
	I1101 00:08:42.783060   26955 command_runner.go:130] >         reload
	I1101 00:08:42.783067   26955 command_runner.go:130] >         loadbalance
	I1101 00:08:42.783074   26955 command_runner.go:130] >     }
	I1101 00:08:42.783080   26955 command_runner.go:130] > kind: ConfigMap
	I1101 00:08:42.783097   26955 command_runner.go:130] > metadata:
	I1101 00:08:42.783106   26955 command_runner.go:130] >   creationTimestamp: "2023-11-01T00:08:30Z"
	I1101 00:08:42.783113   26955 command_runner.go:130] >   name: coredns
	I1101 00:08:42.783121   26955 command_runner.go:130] >   namespace: kube-system
	I1101 00:08:42.783128   26955 command_runner.go:130] >   resourceVersion: "225"
	I1101 00:08:42.783139   26955 command_runner.go:130] >   uid: 31ab598b-b8d9-4371-84e5-236ff729854b
	I1101 00:08:42.785670   26955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
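The ssh_runner command above rewrites the coredns ConfigMap with sed, inserting a hosts stanza for 192.168.39.1 host.minikube.internal ahead of the forward block (plus a log directive before errors) and feeding the result to kubectl replace. A hedged client-go sketch of the same edit (only the ConfigMap name, namespace, injected stanza and kubeconfig path come from the log; the string-matching approach is an assumption):

    package main

    import (
        "context"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    const hostsBlock = `        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
`

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17486-7305/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        ctx := context.TODO()
        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Insert the hosts stanza just above the "forward . /etc/resolv.conf" line,
        // mirroring the sed expression in the logged shell pipeline.
        corefile := cm.Data["Corefile"]
        cm.Data["Corefile"] = strings.Replace(corefile, "        forward .", hostsBlock+"        forward .", 1)
        if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }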
	I1101 00:08:42.785889   26955 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 00:08:42.786181   26955 kapi.go:59] client config for multinode-600483: &rest.Config{Host:"https://192.168.39.130:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.key", CAFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
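The kapi.go:59 dump above is the rest.Config the test builds for the multinode-600483 profile: host https://192.168.39.130:8443 plus the profile's client certificate, key and CA from the .minikube tree. A minimal hand-written equivalent (host and paths copied from the log, everything else left at defaults; the node Get at the end is just a smoke test):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{
            Host: "https://192.168.39.130:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.crt",
                KeyFile:  "/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.key",
                CAFile:   "/home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt",
            },
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-600483", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println(node.Name)
    }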
	I1101 00:08:42.786435   26955 node_ready.go:35] waiting up to 6m0s for node "multinode-600483" to be "Ready" ...
	I1101 00:08:42.786511   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:08:42.786522   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:42.786534   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:42.786548   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:42.812229   26955 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I1101 00:08:42.812259   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:42.812271   26955 round_trippers.go:580]     Audit-Id: d212d221-f49a-4130-aa8a-b6df347e78ad
	I1101 00:08:42.812280   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:42.812286   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:42.812292   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:42.812297   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:42.812302   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:42 GMT
	I1101 00:08:42.812399   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"300","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 6100 chars]
	I1101 00:08:42.812948   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:08:42.812960   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:42.812967   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:42.812974   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:42.891949   26955 round_trippers.go:574] Response Status: 200 OK in 78 milliseconds
	I1101 00:08:42.891977   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:42.891985   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:42.891990   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:42 GMT
	I1101 00:08:42.891995   26955 round_trippers.go:580]     Audit-Id: 689e4339-c336-4c7a-bbb4-c12a772f8d85
	I1101 00:08:42.892000   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:42.892006   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:42.892011   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:42.899806   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"300","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 6100 chars]
	I1101 00:08:43.400914   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:08:43.400940   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:43.400948   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:43.400954   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:43.404801   26955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:08:43.404825   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:43.404833   26955 round_trippers.go:580]     Audit-Id: 524d3485-c6e4-4222-ae5a-3131a8a5064f
	I1101 00:08:43.404840   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:43.404848   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:43.404856   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:43.404865   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:43.404877   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:43 GMT
	I1101 00:08:43.404979   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"300","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 6100 chars]
	I1101 00:08:43.465258   26955 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1101 00:08:43.467977   26955 main.go:141] libmachine: Making call to close driver server
	I1101 00:08:43.468004   26955 main.go:141] libmachine: (multinode-600483) Calling .Close
	I1101 00:08:43.468412   26955 main.go:141] libmachine: (multinode-600483) DBG | Closing plugin on server side
	I1101 00:08:43.468420   26955 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:08:43.468439   26955 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:08:43.468450   26955 main.go:141] libmachine: Making call to close driver server
	I1101 00:08:43.468469   26955 main.go:141] libmachine: (multinode-600483) Calling .Close
	I1101 00:08:43.468714   26955 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:08:43.468733   26955 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:08:43.468790   26955 main.go:141] libmachine: (multinode-600483) DBG | Closing plugin on server side
	I1101 00:08:43.468833   26955 round_trippers.go:463] GET https://192.168.39.130:8443/apis/storage.k8s.io/v1/storageclasses
	I1101 00:08:43.468845   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:43.468856   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:43.468868   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:43.472056   26955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:08:43.472087   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:43.472097   26955 round_trippers.go:580]     Content-Length: 1273
	I1101 00:08:43.472104   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:43 GMT
	I1101 00:08:43.472113   26955 round_trippers.go:580]     Audit-Id: 66fd8d45-b81b-4217-b935-277dd0cf026f
	I1101 00:08:43.472121   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:43.472130   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:43.472140   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:43.472153   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:43.472229   26955 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"358"},"items":[{"metadata":{"name":"standard","uid":"a96e3748-45be-48d5-a28f-d5accddd8509","resourceVersion":"358","creationTimestamp":"2023-11-01T00:08:43Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-01T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1101 00:08:43.472587   26955 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"a96e3748-45be-48d5-a28f-d5accddd8509","resourceVersion":"358","creationTimestamp":"2023-11-01T00:08:43Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-01T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1101 00:08:43.472644   26955 round_trippers.go:463] PUT https://192.168.39.130:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1101 00:08:43.472656   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:43.472670   26955 round_trippers.go:473]     Content-Type: application/json
	I1101 00:08:43.472684   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:43.472697   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:43.476870   26955 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1101 00:08:43.476896   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:43.476905   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:43.476912   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:43.476919   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:43.476927   26955 round_trippers.go:580]     Content-Length: 1220
	I1101 00:08:43.476934   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:43 GMT
	I1101 00:08:43.476943   26955 round_trippers.go:580]     Audit-Id: 44ba45a5-2aae-4da6-8bde-0a4ddcf9b8e8
	I1101 00:08:43.476954   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:43.477009   26955 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"a96e3748-45be-48d5-a28f-d5accddd8509","resourceVersion":"358","creationTimestamp":"2023-11-01T00:08:43Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-01T00:08:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
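The storageclass step above ends with a GET of all StorageClasses followed by a PUT on standard, which is how the default-storageclass addon ensures standard carries the storageclass.kubernetes.io/is-default-class=true annotation. A simplified client-go sketch of that reconciliation (illustrative only; clearing the annotation on other classes is an assumption about the addon's behavior, not something visible in this log):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17486-7305/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        ctx := context.TODO()
        scs, err := cs.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for i := range scs.Items {
            sc := &scs.Items[i]
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            // Only "standard" keeps the default-class annotation.
            if sc.Name == "standard" {
                sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
            } else {
                sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
            }
            if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
                panic(err)
            }
        }
    }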
	I1101 00:08:43.477163   26955 main.go:141] libmachine: Making call to close driver server
	I1101 00:08:43.477183   26955 main.go:141] libmachine: (multinode-600483) Calling .Close
	I1101 00:08:43.477462   26955 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:08:43.477488   26955 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:08:43.477493   26955 main.go:141] libmachine: (multinode-600483) DBG | Closing plugin on server side
	I1101 00:08:43.658595   26955 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1101 00:08:43.672306   26955 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1101 00:08:43.684347   26955 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1101 00:08:43.699667   26955 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1101 00:08:43.710414   26955 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1101 00:08:43.730031   26955 command_runner.go:130] > pod/storage-provisioner created
	I1101 00:08:43.732114   26955 command_runner.go:130] > configmap/coredns replaced
	I1101 00:08:43.732131   26955 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.047389109s)
	I1101 00:08:43.732147   26955 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1101 00:08:43.732166   26955 main.go:141] libmachine: Making call to close driver server
	I1101 00:08:43.732178   26955 main.go:141] libmachine: (multinode-600483) Calling .Close
	I1101 00:08:43.732469   26955 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:08:43.732485   26955 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:08:43.732494   26955 main.go:141] libmachine: Making call to close driver server
	I1101 00:08:43.732501   26955 main.go:141] libmachine: (multinode-600483) Calling .Close
	I1101 00:08:43.732690   26955 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:08:43.732707   26955 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:08:43.735884   26955 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1101 00:08:43.737452   26955 addons.go:502] enable addons completed in 1.260683344s: enabled=[default-storageclass storage-provisioner]
	I1101 00:08:43.900435   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:08:43.900456   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:43.900464   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:43.900470   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:43.904165   26955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:08:43.904189   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:43.904200   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:43.904210   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:43.904218   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:43 GMT
	I1101 00:08:43.904227   26955 round_trippers.go:580]     Audit-Id: 9556c0c2-a7ac-4509-b649-de3341866804
	I1101 00:08:43.904236   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:43.904246   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:43.904360   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"300","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 6100 chars]
	I1101 00:08:44.400921   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:08:44.400944   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:44.400953   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:44.400959   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:44.404105   26955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:08:44.404129   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:44.404138   26955 round_trippers.go:580]     Audit-Id: a40e6f1c-83f6-4c18-be5b-658bba00da6e
	I1101 00:08:44.404146   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:44.404154   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:44.404161   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:44.404168   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:44.404176   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:44 GMT
	I1101 00:08:44.404413   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"300","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 6100 chars]
	I1101 00:08:44.901176   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:08:44.901205   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:44.901217   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:44.901225   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:44.904567   26955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:08:44.904607   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:44.904617   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:44 GMT
	I1101 00:08:44.904627   26955 round_trippers.go:580]     Audit-Id: 5f0dba98-808a-4b94-97a9-0b53ecd3f1c1
	I1101 00:08:44.904636   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:44.904643   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:44.904649   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:44.904655   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:44.904807   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"300","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 6100 chars]
	I1101 00:08:44.905092   26955 node_ready.go:58] node "multinode-600483" has status "Ready":"False"
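The repeating GET /api/v1/nodes/multinode-600483 calls above, each followed by node_ready.go re-checking the status, are a plain poll on the node's Ready condition with the 6m0s budget announced earlier. Expressed with client-go (a sketch of the pattern, not minikube's node_ready.go itself; the 500ms interval matches the roughly half-second spacing of the requests in this log):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17486-7305/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        ctx := context.TODO()
        deadline := time.Now().Add(6 * time.Minute) // same "waiting up to 6m0s" budget as the log
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(ctx, "multinode-600483", metav1.GetOptions{})
            if err == nil && nodeReady(node) {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for node to become Ready")
    }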
	I1101 00:08:45.401243   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:08:45.401265   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:45.401273   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:45.401280   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:45.404323   26955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:08:45.404340   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:45.404346   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:45.404355   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:45 GMT
	I1101 00:08:45.404363   26955 round_trippers.go:580]     Audit-Id: 1b976dcd-ed2c-4558-8ea3-95c0bced209b
	I1101 00:08:45.404372   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:45.404381   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:45.404393   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:45.405815   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"300","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 6100 chars]
	I1101 00:08:45.901163   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:08:45.901183   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:45.901191   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:45.901197   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:45.904001   26955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:08:45.904017   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:45.904023   26955 round_trippers.go:580]     Audit-Id: d7bda614-3727-49a1-b277-038cc5864a1a
	I1101 00:08:45.904029   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:45.904034   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:45.904039   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:45.904062   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:45.904075   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:45 GMT
	I1101 00:08:45.904356   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"300","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 6100 chars]
	I1101 00:08:46.401080   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:08:46.401104   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:46.401114   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:46.401121   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:46.403295   26955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:08:46.403317   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:46.403326   26955 round_trippers.go:580]     Audit-Id: 83fda448-7b81-4da0-b506-0f919d8f5938
	I1101 00:08:46.403334   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:46.403342   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:46.403351   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:46.403358   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:46.403366   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:46 GMT
	I1101 00:08:46.403602   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"300","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 6100 chars]
	I1101 00:08:46.901188   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:08:46.901214   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:46.901222   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:46.901228   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:46.906356   26955 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1101 00:08:46.906381   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:46.906389   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:46 GMT
	I1101 00:08:46.906394   26955 round_trippers.go:580]     Audit-Id: 24af8c80-3657-481a-aa9f-acb9cf89665b
	I1101 00:08:46.906400   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:46.906405   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:46.906410   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:46.906419   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:46.906518   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"300","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 6100 chars]
	I1101 00:08:46.906819   26955 node_ready.go:58] node "multinode-600483" has status "Ready":"False"
	I1101 00:08:47.401099   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:08:47.401123   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:47.401131   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:47.401137   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:47.403847   26955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:08:47.403867   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:47.403874   26955 round_trippers.go:580]     Audit-Id: 479cc10e-7fac-4ded-8d48-a292f231f94c
	I1101 00:08:47.403879   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:47.403884   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:47.403889   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:47.403895   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:47.403900   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:47 GMT
	I1101 00:08:47.404127   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"300","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 6100 chars]
	I1101 00:08:47.901264   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:08:47.901286   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:47.901294   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:47.901300   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:47.908684   26955 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1101 00:08:47.908705   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:47.908712   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:47.908718   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:47.908722   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:47.908728   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:47.908733   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:47 GMT
	I1101 00:08:47.908738   26955 round_trippers.go:580]     Audit-Id: 2c4a4764-14e9-46b4-b683-c9cc7b1bc897
	I1101 00:08:47.908839   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"382","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5906 chars]
	I1101 00:08:47.909242   26955 node_ready.go:49] node "multinode-600483" has status "Ready":"True"
	I1101 00:08:47.909261   26955 node_ready.go:38] duration metric: took 5.122807118s waiting for node "multinode-600483" to be "Ready" ...
	I1101 00:08:47.909273   26955 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 00:08:47.909342   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods
	I1101 00:08:47.909355   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:47.909367   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:47.909384   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:47.916558   26955 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1101 00:08:47.916579   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:47.916586   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:47.916591   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:47.916596   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:47.916601   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:47 GMT
	I1101 00:08:47.916606   26955 round_trippers.go:580]     Audit-Id: 4009eb1a-b776-4811-a45f-e66622dbda58
	I1101 00:08:47.916613   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:47.918844   26955 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"388"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rpvvn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d8ab0ebb-aa1f-4143-b987-6c1ae065954a","resourceVersion":"386","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15779dee-f1e7-4836-aba2-2d57728c2309","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15779dee-f1e7-4836-aba2-2d57728c2309\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53917 chars]
	I1101 00:08:47.921711   26955 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rpvvn" in "kube-system" namespace to be "Ready" ...
	I1101 00:08:47.921788   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rpvvn
	I1101 00:08:47.921797   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:47.921813   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:47.921822   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:47.924292   26955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:08:47.924313   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:47.924324   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:47 GMT
	I1101 00:08:47.924332   26955 round_trippers.go:580]     Audit-Id: 6b664028-0612-4b9d-9672-db4e7c34e395
	I1101 00:08:47.924337   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:47.924342   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:47.924348   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:47.924353   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:47.924487   26955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rpvvn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d8ab0ebb-aa1f-4143-b987-6c1ae065954a","resourceVersion":"386","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15779dee-f1e7-4836-aba2-2d57728c2309","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15779dee-f1e7-4836-aba2-2d57728c2309\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1101 00:08:47.924914   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:08:47.924927   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:47.924934   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:47.924940   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:47.926837   26955 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1101 00:08:47.926852   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:47.926863   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:47 GMT
	I1101 00:08:47.926872   26955 round_trippers.go:580]     Audit-Id: 9b849328-71e3-4495-84cf-20d459ec8871
	I1101 00:08:47.926881   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:47.926887   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:47.926892   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:47.926900   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:47.927253   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"382","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5906 chars]
	I1101 00:08:47.927558   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rpvvn
	I1101 00:08:47.927568   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:47.927575   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:47.927581   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:47.929606   26955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:08:47.929618   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:47.929623   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:47.929632   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:47 GMT
	I1101 00:08:47.929641   26955 round_trippers.go:580]     Audit-Id: 3ed1d6eb-2c0f-4243-a73f-ccfc4b42befe
	I1101 00:08:47.929650   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:47.929659   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:47.929667   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:47.929785   26955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rpvvn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d8ab0ebb-aa1f-4143-b987-6c1ae065954a","resourceVersion":"386","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15779dee-f1e7-4836-aba2-2d57728c2309","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15779dee-f1e7-4836-aba2-2d57728c2309\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1101 00:08:47.930140   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:08:47.930152   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:47.930159   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:47.930165   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:47.932427   26955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:08:47.932442   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:47.932448   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:47.932453   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:47.932458   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:47 GMT
	I1101 00:08:47.932463   26955 round_trippers.go:580]     Audit-Id: 75620f2a-c77c-4e19-b2a9-2b218ffe4fa6
	I1101 00:08:47.932468   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:47.932473   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:47.933238   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"382","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5906 chars]
	I1101 00:08:48.434350   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rpvvn
	I1101 00:08:48.434380   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:48.434394   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:48.434404   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:48.437084   26955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:08:48.437109   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:48.437120   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:48.437126   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:48 GMT
	I1101 00:08:48.437131   26955 round_trippers.go:580]     Audit-Id: e3c287eb-82a5-4e8d-9f44-e1d26bbad1dc
	I1101 00:08:48.437137   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:48.437142   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:48.437147   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:48.437417   26955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rpvvn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d8ab0ebb-aa1f-4143-b987-6c1ae065954a","resourceVersion":"386","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15779dee-f1e7-4836-aba2-2d57728c2309","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15779dee-f1e7-4836-aba2-2d57728c2309\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1101 00:08:48.438005   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:08:48.438026   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:48.438036   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:48.438044   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:48.442215   26955 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1101 00:08:48.442231   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:48.442237   26955 round_trippers.go:580]     Audit-Id: 7dccbf1e-6ee2-43cb-84c1-49d08a904995
	I1101 00:08:48.442243   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:48.442248   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:48.442253   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:48.442259   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:48.442264   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:48 GMT
	I1101 00:08:48.442721   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"382","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5906 chars]
	I1101 00:08:48.934430   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rpvvn
	I1101 00:08:48.934454   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:48.934461   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:48.934467   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:48.937261   26955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:08:48.937281   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:48.937291   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:48.937298   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:48.937306   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:48.937314   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:48.937326   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:48 GMT
	I1101 00:08:48.937335   26955 round_trippers.go:580]     Audit-Id: 2ad2d096-d14e-4eb2-b0c2-57db5fb5eed3
	I1101 00:08:48.937545   26955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rpvvn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d8ab0ebb-aa1f-4143-b987-6c1ae065954a","resourceVersion":"386","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15779dee-f1e7-4836-aba2-2d57728c2309","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15779dee-f1e7-4836-aba2-2d57728c2309\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1101 00:08:48.937981   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:08:48.937995   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:48.938002   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:48.938008   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:48.940408   26955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:08:48.940422   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:48.940431   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:48.940438   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:48.940446   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:48 GMT
	I1101 00:08:48.940455   26955 round_trippers.go:580]     Audit-Id: 6f2afb1e-7aed-46d6-8dee-eb9ae61ca0c0
	I1101 00:08:48.940466   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:48.940476   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:48.940674   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"382","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5906 chars]
	I1101 00:08:49.434393   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rpvvn
	I1101 00:08:49.434418   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:49.434429   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:49.434437   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:49.437498   26955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:08:49.437522   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:49.437533   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:49 GMT
	I1101 00:08:49.437542   26955 round_trippers.go:580]     Audit-Id: add3692e-812f-42b4-a0f4-ad20a13dbb99
	I1101 00:08:49.437550   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:49.437555   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:49.437561   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:49.437566   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:49.437697   26955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rpvvn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d8ab0ebb-aa1f-4143-b987-6c1ae065954a","resourceVersion":"386","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15779dee-f1e7-4836-aba2-2d57728c2309","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15779dee-f1e7-4836-aba2-2d57728c2309\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1101 00:08:49.438097   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:08:49.438107   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:49.438114   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:49.438120   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:49.441851   26955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:08:49.441875   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:49.441885   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:49.441894   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:49 GMT
	I1101 00:08:49.441902   26955 round_trippers.go:580]     Audit-Id: b81cbf15-4352-445f-bf50-d719687ef263
	I1101 00:08:49.441910   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:49.441919   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:49.441928   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:49.442137   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"382","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5906 chars]
	I1101 00:08:49.933801   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rpvvn
	I1101 00:08:49.933828   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:49.933839   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:49.933850   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:49.936862   26955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:08:49.936890   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:49.936900   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:49.936909   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:49 GMT
	I1101 00:08:49.936917   26955 round_trippers.go:580]     Audit-Id: e6674815-841e-4cad-94d8-73b894512e57
	I1101 00:08:49.936929   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:49.936938   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:49.936951   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:49.937066   26955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rpvvn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d8ab0ebb-aa1f-4143-b987-6c1ae065954a","resourceVersion":"401","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15779dee-f1e7-4836-aba2-2d57728c2309","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15779dee-f1e7-4836-aba2-2d57728c2309\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1101 00:08:49.937559   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:08:49.937575   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:49.937583   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:49.937588   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:49.940157   26955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:08:49.940177   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:49.940186   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:49.940194   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:49 GMT
	I1101 00:08:49.940201   26955 round_trippers.go:580]     Audit-Id: d4ab3028-15bc-455b-981c-7fd70375e179
	I1101 00:08:49.940208   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:49.940217   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:49.940227   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:49.940435   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"382","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5906 chars]
	I1101 00:08:49.940782   26955 pod_ready.go:92] pod "coredns-5dd5756b68-rpvvn" in "kube-system" namespace has status "Ready":"True"
	I1101 00:08:49.940800   26955 pod_ready.go:81] duration metric: took 2.01906612s waiting for pod "coredns-5dd5756b68-rpvvn" in "kube-system" namespace to be "Ready" ...
	I1101 00:08:49.940811   26955 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:08:49.940873   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-600483
	I1101 00:08:49.940884   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:49.940891   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:49.940899   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:49.942977   26955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:08:49.942998   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:49.943007   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:49.943017   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:49.943025   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:49 GMT
	I1101 00:08:49.943033   26955 round_trippers.go:580]     Audit-Id: 8ab2e09c-afc2-4d75-b2ec-e4b0101f02e3
	I1101 00:08:49.943040   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:49.943048   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:49.943215   26955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-600483","namespace":"kube-system","uid":"c612ebac-fa1d-474a-b8cd-5e922a5f76dd","resourceVersion":"264","creationTimestamp":"2023-11-01T00:08:30Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.130:2379","kubernetes.io/config.hash":"5629fb0a0414e85632f97c416152ffbb","kubernetes.io/config.mirror":"5629fb0a0414e85632f97c416152ffbb","kubernetes.io/config.seen":"2023-11-01T00:08:30.293496672Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1101 00:08:49.943614   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:08:49.943627   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:49.943634   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:49.943639   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:49.945747   26955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:08:49.945766   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:49.945776   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:49.945785   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:49.945792   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:49.945801   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:49 GMT
	I1101 00:08:49.945808   26955 round_trippers.go:580]     Audit-Id: 28a80a1f-4811-47e2-aa10-f450afd8dd90
	I1101 00:08:49.945835   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:49.946112   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"382","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5906 chars]
	I1101 00:08:49.946427   26955 pod_ready.go:92] pod "etcd-multinode-600483" in "kube-system" namespace has status "Ready":"True"
	I1101 00:08:49.946445   26955 pod_ready.go:81] duration metric: took 5.619424ms waiting for pod "etcd-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:08:49.946461   26955 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:08:49.946515   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-600483
	I1101 00:08:49.946527   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:49.946538   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:49.946548   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:49.948771   26955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:08:49.948787   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:49.948793   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:49 GMT
	I1101 00:08:49.948799   26955 round_trippers.go:580]     Audit-Id: ef29f0d9-8534-447e-b3b4-a550f4933f6b
	I1101 00:08:49.948804   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:49.948809   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:49.948814   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:49.948822   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:49.949034   26955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-600483","namespace":"kube-system","uid":"bd94a63a-62c2-4654-aaf0-2e9df086b168","resourceVersion":"266","creationTimestamp":"2023-11-01T00:08:30Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.130:8443","kubernetes.io/config.hash":"99a9cda13526c350638742a7c7b2ba52","kubernetes.io/config.mirror":"99a9cda13526c350638742a7c7b2ba52","kubernetes.io/config.seen":"2023-11-01T00:08:30.293497612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1101 00:08:49.949423   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:08:49.949444   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:49.949452   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:49.949465   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:49.951279   26955 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1101 00:08:49.951298   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:49.951307   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:49.951316   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:49.951324   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:49.951332   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:49.951340   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:49 GMT
	I1101 00:08:49.951353   26955 round_trippers.go:580]     Audit-Id: 1fa3cf48-195f-4238-ba6c-84fc1d5117dd
	I1101 00:08:49.951487   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"382","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5906 chars]
	I1101 00:08:49.951806   26955 pod_ready.go:92] pod "kube-apiserver-multinode-600483" in "kube-system" namespace has status "Ready":"True"
	I1101 00:08:49.951820   26955 pod_ready.go:81] duration metric: took 5.350492ms waiting for pod "kube-apiserver-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:08:49.951828   26955 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:08:49.951878   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-600483
	I1101 00:08:49.951886   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:49.951893   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:49.951900   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:49.953763   26955 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1101 00:08:49.953781   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:49.953790   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:49.953798   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:49.953806   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:49 GMT
	I1101 00:08:49.953815   26955 round_trippers.go:580]     Audit-Id: 800d486d-0174-4dc9-8591-df07b69aa53d
	I1101 00:08:49.953828   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:49.953842   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:49.954163   26955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-600483","namespace":"kube-system","uid":"9dd41877-c6ea-4591-90e1-632a234ffcf6","resourceVersion":"289","creationTimestamp":"2023-11-01T00:08:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f2b1fcba8b34b1f65e600fae0bd4374a","kubernetes.io/config.mirror":"f2b1fcba8b34b1f65e600fae0bd4374a","kubernetes.io/config.seen":"2023-11-01T00:08:20.448799328Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1101 00:08:49.954522   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:08:49.954542   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:49.954552   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:49.954568   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:49.956299   26955 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1101 00:08:49.956313   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:49.956319   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:49.956325   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:49 GMT
	I1101 00:08:49.956331   26955 round_trippers.go:580]     Audit-Id: b545e874-48b0-4ada-86ea-7e0ac6a744df
	I1101 00:08:49.956339   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:49.956354   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:49.956362   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:49.956487   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"382","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5906 chars]
	I1101 00:08:49.956762   26955 pod_ready.go:92] pod "kube-controller-manager-multinode-600483" in "kube-system" namespace has status "Ready":"True"
	I1101 00:08:49.956777   26955 pod_ready.go:81] duration metric: took 4.941633ms waiting for pod "kube-controller-manager-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:08:49.956789   26955 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tq28b" in "kube-system" namespace to be "Ready" ...
	I1101 00:08:50.101554   26955 request.go:629] Waited for 144.706994ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tq28b
	I1101 00:08:50.101654   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tq28b
	I1101 00:08:50.101661   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:50.101671   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:50.101684   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:50.104939   26955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:08:50.104970   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:50.104979   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:50 GMT
	I1101 00:08:50.104986   26955 round_trippers.go:580]     Audit-Id: b45654b1-7bad-4dde-ac55-bc93a410e1da
	I1101 00:08:50.104993   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:50.105000   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:50.105007   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:50.105013   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:50.105189   26955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tq28b","generateName":"kube-proxy-","namespace":"kube-system","uid":"9534d8b8-4536-4a0a-8af5-440e6871a85f","resourceVersion":"372","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2d674cb3-a003-4ca9-a8b5-a283ae64b7c6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d674cb3-a003-4ca9-a8b5-a283ae64b7c6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1101 00:08:50.302045   26955 request.go:629] Waited for 196.379889ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:08:50.302130   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:08:50.302136   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:50.302144   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:50.302152   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:50.304813   26955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:08:50.304842   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:50.304852   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:50 GMT
	I1101 00:08:50.304859   26955 round_trippers.go:580]     Audit-Id: 9872521d-8c7d-4c5c-bc46-55c64a696f37
	I1101 00:08:50.304866   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:50.304873   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:50.304889   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:50.304898   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:50.305160   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"382","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5906 chars]
	I1101 00:08:50.305471   26955 pod_ready.go:92] pod "kube-proxy-tq28b" in "kube-system" namespace has status "Ready":"True"
	I1101 00:08:50.305489   26955 pod_ready.go:81] duration metric: took 348.69191ms waiting for pod "kube-proxy-tq28b" in "kube-system" namespace to be "Ready" ...
	I1101 00:08:50.305502   26955 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:08:50.502000   26955 request.go:629] Waited for 196.432522ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-600483
	I1101 00:08:50.502065   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-600483
	I1101 00:08:50.502079   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:50.502088   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:50.502098   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:50.505248   26955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:08:50.505271   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:50.505284   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:50.505291   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:50.505299   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:50.505306   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:50 GMT
	I1101 00:08:50.505314   26955 round_trippers.go:580]     Audit-Id: e1516cc8-39a3-4eb5-bc42-4801bac335e2
	I1101 00:08:50.505332   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:50.505466   26955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-600483","namespace":"kube-system","uid":"9cdd0be5-035a-49f5-8796-831ebde28bf0","resourceVersion":"295","creationTimestamp":"2023-11-01T00:08:30Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"01c4e8f68a00a3553dcff3388cb56149","kubernetes.io/config.mirror":"01c4e8f68a00a3553dcff3388cb56149","kubernetes.io/config.seen":"2023-11-01T00:08:30.293495470Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1101 00:08:50.702181   26955 request.go:629] Waited for 196.327889ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:08:50.702246   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:08:50.702251   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:50.702258   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:50.702264   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:50.705104   26955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:08:50.705129   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:50.705137   26955 round_trippers.go:580]     Audit-Id: 10b6a299-00dd-4dba-a5a0-c73992b83209
	I1101 00:08:50.705143   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:50.705148   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:50.705153   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:50.705158   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:50.705163   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:50 GMT
	I1101 00:08:50.705297   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"382","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5906 chars]
	I1101 00:08:50.705579   26955 pod_ready.go:92] pod "kube-scheduler-multinode-600483" in "kube-system" namespace has status "Ready":"True"
	I1101 00:08:50.705592   26955 pod_ready.go:81] duration metric: took 400.08289ms waiting for pod "kube-scheduler-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:08:50.705601   26955 pod_ready.go:38] duration metric: took 2.796315211s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
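(Editor's note: the wait loop above polls each system-critical pod until its Ready condition is true. A minimal hand-run equivalent of that readiness check, assuming the kubectl context carries the minikube profile name as elsewhere in this report, would be:

	# poll until the named kube-system control-plane pods report Ready; the 90s timeout is an arbitrary choice for this sketch
	kubectl --context multinode-600483 -n kube-system wait --for=condition=Ready \
	  pod/etcd-multinode-600483 pod/kube-apiserver-multinode-600483 \
	  pod/kube-controller-manager-multinode-600483 pod/kube-scheduler-multinode-600483 \
	  --timeout=90s

This mirrors the log's per-pod polling, but collapses it into a single blocking call.)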
	I1101 00:08:50.705615   26955 api_server.go:52] waiting for apiserver process to appear ...
	I1101 00:08:50.705660   26955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:08:50.720045   26955 command_runner.go:130] > 1066
	I1101 00:08:50.720263   26955 api_server.go:72] duration metric: took 8.12123353s to wait for apiserver process to appear ...
	I1101 00:08:50.720281   26955 api_server.go:88] waiting for apiserver healthz status ...
	I1101 00:08:50.720297   26955 api_server.go:253] Checking apiserver healthz at https://192.168.39.130:8443/healthz ...
	I1101 00:08:50.726260   26955 api_server.go:279] https://192.168.39.130:8443/healthz returned 200:
	ok
	I1101 00:08:50.726317   26955 round_trippers.go:463] GET https://192.168.39.130:8443/version
	I1101 00:08:50.726322   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:50.726330   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:50.726336   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:50.727579   26955 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1101 00:08:50.727599   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:50.727610   26955 round_trippers.go:580]     Audit-Id: 4602f8d5-29c1-4ce3-88b9-47c3bbbf4b2c
	I1101 00:08:50.727619   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:50.727627   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:50.727635   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:50.727643   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:50.727655   26955 round_trippers.go:580]     Content-Length: 264
	I1101 00:08:50.727663   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:50 GMT
	I1101 00:08:50.727691   26955 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1101 00:08:50.727785   26955 api_server.go:141] control plane version: v1.28.3
	I1101 00:08:50.727802   26955 api_server.go:131] duration metric: took 7.514785ms to wait for apiserver health ...
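(Editor's note: the healthz and version probes above hit the API server directly at https://192.168.39.130:8443. The same two endpoints can be exercised through kubectl's raw API access; a sketch under the same context-name assumption:

	# apiserver liveness probe; prints "ok" when healthy
	kubectl --context multinode-600483 get --raw /healthz
	# client and server version info; the server side should report v1.28.3 for this run
	kubectl --context multinode-600483 version --output=json
)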
	I1101 00:08:50.727812   26955 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 00:08:50.902240   26955 request.go:629] Waited for 174.373924ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods
	I1101 00:08:50.902299   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods
	I1101 00:08:50.902304   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:50.902320   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:50.902327   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:50.906966   26955 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1101 00:08:50.906994   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:50.907004   26955 round_trippers.go:580]     Audit-Id: a885479d-51c8-4347-928e-133846d384ed
	I1101 00:08:50.907013   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:50.907021   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:50.907029   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:50.907037   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:50.907045   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:50 GMT
	I1101 00:08:50.907810   26955 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"408"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rpvvn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d8ab0ebb-aa1f-4143-b987-6c1ae065954a","resourceVersion":"401","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15779dee-f1e7-4836-aba2-2d57728c2309","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15779dee-f1e7-4836-aba2-2d57728c2309\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53995 chars]
	I1101 00:08:50.909686   26955 system_pods.go:59] 8 kube-system pods found
	I1101 00:08:50.909711   26955 system_pods.go:61] "coredns-5dd5756b68-rpvvn" [d8ab0ebb-aa1f-4143-b987-6c1ae065954a] Running
	I1101 00:08:50.909716   26955 system_pods.go:61] "etcd-multinode-600483" [c612ebac-fa1d-474a-b8cd-5e922a5f76dd] Running
	I1101 00:08:50.909720   26955 system_pods.go:61] "kindnet-l75r4" [abfa8ec3-0565-4927-a07c-9fed1240d270] Running
	I1101 00:08:50.909724   26955 system_pods.go:61] "kube-apiserver-multinode-600483" [bd94a63a-62c2-4654-aaf0-2e9df086b168] Running
	I1101 00:08:50.909728   26955 system_pods.go:61] "kube-controller-manager-multinode-600483" [9dd41877-c6ea-4591-90e1-632a234ffcf6] Running
	I1101 00:08:50.909732   26955 system_pods.go:61] "kube-proxy-tq28b" [9534d8b8-4536-4a0a-8af5-440e6871a85f] Running
	I1101 00:08:50.909735   26955 system_pods.go:61] "kube-scheduler-multinode-600483" [9cdd0be5-035a-49f5-8796-831ebde28bf0] Running
	I1101 00:08:50.909739   26955 system_pods.go:61] "storage-provisioner" [a67f136b-7645-4eb9-9568-52e3ab06d66e] Running
	I1101 00:08:50.909746   26955 system_pods.go:74] duration metric: took 181.929031ms to wait for pod list to return data ...
	I1101 00:08:50.909756   26955 default_sa.go:34] waiting for default service account to be created ...
	I1101 00:08:51.102210   26955 request.go:629] Waited for 192.397642ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/namespaces/default/serviceaccounts
	I1101 00:08:51.102278   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/default/serviceaccounts
	I1101 00:08:51.102284   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:51.102291   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:51.102299   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:51.105262   26955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:08:51.105286   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:51.105297   26955 round_trippers.go:580]     Content-Length: 261
	I1101 00:08:51.105306   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:51 GMT
	I1101 00:08:51.105314   26955 round_trippers.go:580]     Audit-Id: 951d0030-1085-4e53-8c07-32c5e112957e
	I1101 00:08:51.105320   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:51.105326   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:51.105331   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:51.105336   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:51.105357   26955 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"408"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"04a1b135-6f95-4452-a7aa-e2cd772cc1b9","resourceVersion":"301","creationTimestamp":"2023-11-01T00:08:42Z"}}]}
	I1101 00:08:51.105523   26955 default_sa.go:45] found service account: "default"
	I1101 00:08:51.105537   26955 default_sa.go:55] duration metric: took 195.776301ms for default service account to be created ...
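(Editor's note: the default service account check is a plain list of ServiceAccounts in the default namespace. A rough manual equivalent, same context assumption:

	# the "default" ServiceAccount is created asynchronously after the namespace exists;
	# a NotFound here usually just means the controller-manager has not caught up yet
	kubectl --context multinode-600483 -n default get serviceaccount default
)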
	I1101 00:08:51.105544   26955 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 00:08:51.301996   26955 request.go:629] Waited for 196.391477ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods
	I1101 00:08:51.302050   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods
	I1101 00:08:51.302055   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:51.302063   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:51.302069   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:51.305748   26955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:08:51.305778   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:51.305788   26955 round_trippers.go:580]     Audit-Id: c53c2e93-9dce-419c-8cdc-88768bfa3e80
	I1101 00:08:51.305797   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:51.305805   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:51.305813   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:51.305821   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:51.305828   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:51 GMT
	I1101 00:08:51.306695   26955 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"408"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rpvvn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d8ab0ebb-aa1f-4143-b987-6c1ae065954a","resourceVersion":"401","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15779dee-f1e7-4836-aba2-2d57728c2309","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15779dee-f1e7-4836-aba2-2d57728c2309\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53995 chars]
	I1101 00:08:51.308380   26955 system_pods.go:86] 8 kube-system pods found
	I1101 00:08:51.308408   26955 system_pods.go:89] "coredns-5dd5756b68-rpvvn" [d8ab0ebb-aa1f-4143-b987-6c1ae065954a] Running
	I1101 00:08:51.308422   26955 system_pods.go:89] "etcd-multinode-600483" [c612ebac-fa1d-474a-b8cd-5e922a5f76dd] Running
	I1101 00:08:51.308433   26955 system_pods.go:89] "kindnet-l75r4" [abfa8ec3-0565-4927-a07c-9fed1240d270] Running
	I1101 00:08:51.308445   26955 system_pods.go:89] "kube-apiserver-multinode-600483" [bd94a63a-62c2-4654-aaf0-2e9df086b168] Running
	I1101 00:08:51.308454   26955 system_pods.go:89] "kube-controller-manager-multinode-600483" [9dd41877-c6ea-4591-90e1-632a234ffcf6] Running
	I1101 00:08:51.308465   26955 system_pods.go:89] "kube-proxy-tq28b" [9534d8b8-4536-4a0a-8af5-440e6871a85f] Running
	I1101 00:08:51.308469   26955 system_pods.go:89] "kube-scheduler-multinode-600483" [9cdd0be5-035a-49f5-8796-831ebde28bf0] Running
	I1101 00:08:51.308473   26955 system_pods.go:89] "storage-provisioner" [a67f136b-7645-4eb9-9568-52e3ab06d66e] Running
	I1101 00:08:51.308480   26955 system_pods.go:126] duration metric: took 202.931615ms to wait for k8s-apps to be running ...
	I1101 00:08:51.308487   26955 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 00:08:51.308542   26955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 00:08:51.321898   26955 system_svc.go:56] duration metric: took 13.40279ms WaitForService to wait for kubelet.
	I1101 00:08:51.321919   26955 kubeadm.go:581] duration metric: took 8.722894464s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1101 00:08:51.321936   26955 node_conditions.go:102] verifying NodePressure condition ...
	I1101 00:08:51.501300   26955 request.go:629] Waited for 179.287666ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/nodes
	I1101 00:08:51.501369   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes
	I1101 00:08:51.501375   26955 round_trippers.go:469] Request Headers:
	I1101 00:08:51.501382   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:08:51.501388   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:08:51.504092   26955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:08:51.504125   26955 round_trippers.go:577] Response Headers:
	I1101 00:08:51.504132   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:08:51.504137   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:08:51 GMT
	I1101 00:08:51.504142   26955 round_trippers.go:580]     Audit-Id: 0b21687b-b700-481a-a6fc-005eace64009
	I1101 00:08:51.504147   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:08:51.504152   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:08:51.504157   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:08:51.504407   26955 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"408"},"items":[{"metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"382","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"manage
dFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1"," [truncated 5959 chars]
	I1101 00:08:51.504733   26955 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 00:08:51.504750   26955 node_conditions.go:123] node cpu capacity is 2
	I1101 00:08:51.504759   26955 node_conditions.go:105] duration metric: took 182.818958ms to run NodePressure ...
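The "Waited for ... due to client-side throttling" lines above come from the Kubernetes client's own request rate limiting, not from server-side priority and fairness. A minimal Go sketch of that wait-before-request pattern using golang.org/x/time/rate; the QPS and burst values here are made up for illustration (client-go uses its own flowcontrol limiter, but the blocking behaviour is analogous):

    package main

    import (
        "context"
        "fmt"
        "time"

        "golang.org/x/time/rate"
    )

    func main() {
        // Hypothetical client-side limiter: 5 requests/second with a burst of 10.
        limiter := rate.NewLimiter(rate.Limit(5), 10)

        for i := 0; i < 3; i++ {
            start := time.Now()
            // Wait blocks until the limiter permits another request; this pause is
            // what produces the "Waited for ... due to client-side throttling" lines.
            if err := limiter.Wait(context.Background()); err != nil {
                panic(err)
            }
            fmt.Printf("request %d allowed after %v\n", i, time.Since(start))
        }
    }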
	I1101 00:08:51.504769   26955 start.go:228] waiting for startup goroutines ...
	I1101 00:08:51.504779   26955 start.go:233] waiting for cluster config update ...
	I1101 00:08:51.504787   26955 start.go:242] writing updated cluster config ...
	I1101 00:08:51.507036   26955 out.go:177] 
	I1101 00:08:51.508704   26955 config.go:182] Loaded profile config "multinode-600483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:08:51.508783   26955 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/config.json ...
	I1101 00:08:51.510518   26955 out.go:177] * Starting worker node multinode-600483-m02 in cluster multinode-600483
	I1101 00:08:51.511806   26955 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 00:08:51.511825   26955 cache.go:56] Caching tarball of preloaded images
	I1101 00:08:51.511904   26955 preload.go:174] Found /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 00:08:51.511915   26955 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1101 00:08:51.511988   26955 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/config.json ...
	I1101 00:08:51.512142   26955 start.go:365] acquiring machines lock for multinode-600483-m02: {Name:mk7aad88408c319111b9be8e59d9593a9e88374b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 00:08:51.512180   26955 start.go:369] acquired machines lock for "multinode-600483-m02" in 20.884µs
	I1101 00:08:51.512195   26955 start.go:93] Provisioning new machine with config: &{Name:multinode-600483 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.3 ClusterName:multinode-600483 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.130 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:
true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1101 00:08:51.512256   26955 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1101 00:08:51.513927   26955 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1101 00:08:51.513990   26955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1101 00:08:51.514013   26955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:08:51.527819   26955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45173
	I1101 00:08:51.528344   26955 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:08:51.528840   26955 main.go:141] libmachine: Using API Version  1
	I1101 00:08:51.528859   26955 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:08:51.529124   26955 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:08:51.529292   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetMachineName
	I1101 00:08:51.529416   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .DriverName
	I1101 00:08:51.529573   26955 start.go:159] libmachine.API.Create for "multinode-600483" (driver="kvm2")
	I1101 00:08:51.529592   26955 client.go:168] LocalClient.Create starting
	I1101 00:08:51.529628   26955 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem
	I1101 00:08:51.529662   26955 main.go:141] libmachine: Decoding PEM data...
	I1101 00:08:51.529676   26955 main.go:141] libmachine: Parsing certificate...
	I1101 00:08:51.529720   26955 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem
	I1101 00:08:51.529737   26955 main.go:141] libmachine: Decoding PEM data...
	I1101 00:08:51.529748   26955 main.go:141] libmachine: Parsing certificate...
	I1101 00:08:51.529764   26955 main.go:141] libmachine: Running pre-create checks...
	I1101 00:08:51.529772   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .PreCreateCheck
	I1101 00:08:51.529921   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetConfigRaw
	I1101 00:08:51.530310   26955 main.go:141] libmachine: Creating machine...
	I1101 00:08:51.530323   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .Create
	I1101 00:08:51.530446   26955 main.go:141] libmachine: (multinode-600483-m02) Creating KVM machine...
	I1101 00:08:51.531728   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | found existing default KVM network
	I1101 00:08:51.531847   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | found existing private KVM network mk-multinode-600483
	I1101 00:08:51.532014   26955 main.go:141] libmachine: (multinode-600483-m02) Setting up store path in /home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483-m02 ...
	I1101 00:08:51.532031   26955 main.go:141] libmachine: (multinode-600483-m02) Building disk image from file:///home/jenkins/minikube-integration/17486-7305/.minikube/cache/iso/amd64/minikube-v1.32.0-1698773592-17486-amd64.iso
	I1101 00:08:51.532153   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | I1101 00:08:51.532015   27326 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17486-7305/.minikube
	I1101 00:08:51.532216   26955 main.go:141] libmachine: (multinode-600483-m02) Downloading /home/jenkins/minikube-integration/17486-7305/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17486-7305/.minikube/cache/iso/amd64/minikube-v1.32.0-1698773592-17486-amd64.iso...
	I1101 00:08:51.730212   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | I1101 00:08:51.730071   27326 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483-m02/id_rsa...
	I1101 00:08:51.870419   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | I1101 00:08:51.870274   27326 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483-m02/multinode-600483-m02.rawdisk...
	I1101 00:08:51.870455   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | Writing magic tar header
	I1101 00:08:51.870472   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | Writing SSH key tar header
	I1101 00:08:51.870483   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | I1101 00:08:51.870381   27326 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483-m02 ...
	I1101 00:08:51.870499   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483-m02
	I1101 00:08:51.870511   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17486-7305/.minikube/machines
	I1101 00:08:51.870528   26955 main.go:141] libmachine: (multinode-600483-m02) Setting executable bit set on /home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483-m02 (perms=drwx------)
	I1101 00:08:51.870548   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17486-7305/.minikube
	I1101 00:08:51.870565   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17486-7305
	I1101 00:08:51.870580   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1101 00:08:51.870594   26955 main.go:141] libmachine: (multinode-600483-m02) Setting executable bit set on /home/jenkins/minikube-integration/17486-7305/.minikube/machines (perms=drwxr-xr-x)
	I1101 00:08:51.870612   26955 main.go:141] libmachine: (multinode-600483-m02) Setting executable bit set on /home/jenkins/minikube-integration/17486-7305/.minikube (perms=drwxr-xr-x)
	I1101 00:08:51.870627   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | Checking permissions on dir: /home/jenkins
	I1101 00:08:51.870641   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | Checking permissions on dir: /home
	I1101 00:08:51.870653   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | Skipping /home - not owner
	I1101 00:08:51.870677   26955 main.go:141] libmachine: (multinode-600483-m02) Setting executable bit set on /home/jenkins/minikube-integration/17486-7305 (perms=drwxrwxr-x)
	I1101 00:08:51.870699   26955 main.go:141] libmachine: (multinode-600483-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1101 00:08:51.870716   26955 main.go:141] libmachine: (multinode-600483-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1101 00:08:51.870729   26955 main.go:141] libmachine: (multinode-600483-m02) Creating domain...
	I1101 00:08:51.871562   26955 main.go:141] libmachine: (multinode-600483-m02) define libvirt domain using xml: 
	I1101 00:08:51.871587   26955 main.go:141] libmachine: (multinode-600483-m02) <domain type='kvm'>
	I1101 00:08:51.871600   26955 main.go:141] libmachine: (multinode-600483-m02)   <name>multinode-600483-m02</name>
	I1101 00:08:51.871610   26955 main.go:141] libmachine: (multinode-600483-m02)   <memory unit='MiB'>2200</memory>
	I1101 00:08:51.871621   26955 main.go:141] libmachine: (multinode-600483-m02)   <vcpu>2</vcpu>
	I1101 00:08:51.871633   26955 main.go:141] libmachine: (multinode-600483-m02)   <features>
	I1101 00:08:51.871644   26955 main.go:141] libmachine: (multinode-600483-m02)     <acpi/>
	I1101 00:08:51.871649   26955 main.go:141] libmachine: (multinode-600483-m02)     <apic/>
	I1101 00:08:51.871677   26955 main.go:141] libmachine: (multinode-600483-m02)     <pae/>
	I1101 00:08:51.871693   26955 main.go:141] libmachine: (multinode-600483-m02)     
	I1101 00:08:51.871704   26955 main.go:141] libmachine: (multinode-600483-m02)   </features>
	I1101 00:08:51.871729   26955 main.go:141] libmachine: (multinode-600483-m02)   <cpu mode='host-passthrough'>
	I1101 00:08:51.871743   26955 main.go:141] libmachine: (multinode-600483-m02)   
	I1101 00:08:51.871758   26955 main.go:141] libmachine: (multinode-600483-m02)   </cpu>
	I1101 00:08:51.871786   26955 main.go:141] libmachine: (multinode-600483-m02)   <os>
	I1101 00:08:51.871816   26955 main.go:141] libmachine: (multinode-600483-m02)     <type>hvm</type>
	I1101 00:08:51.871829   26955 main.go:141] libmachine: (multinode-600483-m02)     <boot dev='cdrom'/>
	I1101 00:08:51.871842   26955 main.go:141] libmachine: (multinode-600483-m02)     <boot dev='hd'/>
	I1101 00:08:51.871856   26955 main.go:141] libmachine: (multinode-600483-m02)     <bootmenu enable='no'/>
	I1101 00:08:51.871866   26955 main.go:141] libmachine: (multinode-600483-m02)   </os>
	I1101 00:08:51.871872   26955 main.go:141] libmachine: (multinode-600483-m02)   <devices>
	I1101 00:08:51.871885   26955 main.go:141] libmachine: (multinode-600483-m02)     <disk type='file' device='cdrom'>
	I1101 00:08:51.871903   26955 main.go:141] libmachine: (multinode-600483-m02)       <source file='/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483-m02/boot2docker.iso'/>
	I1101 00:08:51.871922   26955 main.go:141] libmachine: (multinode-600483-m02)       <target dev='hdc' bus='scsi'/>
	I1101 00:08:51.871953   26955 main.go:141] libmachine: (multinode-600483-m02)       <readonly/>
	I1101 00:08:51.871967   26955 main.go:141] libmachine: (multinode-600483-m02)     </disk>
	I1101 00:08:51.871980   26955 main.go:141] libmachine: (multinode-600483-m02)     <disk type='file' device='disk'>
	I1101 00:08:51.872010   26955 main.go:141] libmachine: (multinode-600483-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1101 00:08:51.872033   26955 main.go:141] libmachine: (multinode-600483-m02)       <source file='/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483-m02/multinode-600483-m02.rawdisk'/>
	I1101 00:08:51.872048   26955 main.go:141] libmachine: (multinode-600483-m02)       <target dev='hda' bus='virtio'/>
	I1101 00:08:51.872059   26955 main.go:141] libmachine: (multinode-600483-m02)     </disk>
	I1101 00:08:51.872073   26955 main.go:141] libmachine: (multinode-600483-m02)     <interface type='network'>
	I1101 00:08:51.872089   26955 main.go:141] libmachine: (multinode-600483-m02)       <source network='mk-multinode-600483'/>
	I1101 00:08:51.872100   26955 main.go:141] libmachine: (multinode-600483-m02)       <model type='virtio'/>
	I1101 00:08:51.872113   26955 main.go:141] libmachine: (multinode-600483-m02)     </interface>
	I1101 00:08:51.872127   26955 main.go:141] libmachine: (multinode-600483-m02)     <interface type='network'>
	I1101 00:08:51.872139   26955 main.go:141] libmachine: (multinode-600483-m02)       <source network='default'/>
	I1101 00:08:51.872151   26955 main.go:141] libmachine: (multinode-600483-m02)       <model type='virtio'/>
	I1101 00:08:51.872164   26955 main.go:141] libmachine: (multinode-600483-m02)     </interface>
	I1101 00:08:51.872181   26955 main.go:141] libmachine: (multinode-600483-m02)     <serial type='pty'>
	I1101 00:08:51.872193   26955 main.go:141] libmachine: (multinode-600483-m02)       <target port='0'/>
	I1101 00:08:51.872201   26955 main.go:141] libmachine: (multinode-600483-m02)     </serial>
	I1101 00:08:51.872212   26955 main.go:141] libmachine: (multinode-600483-m02)     <console type='pty'>
	I1101 00:08:51.872226   26955 main.go:141] libmachine: (multinode-600483-m02)       <target type='serial' port='0'/>
	I1101 00:08:51.872240   26955 main.go:141] libmachine: (multinode-600483-m02)     </console>
	I1101 00:08:51.872256   26955 main.go:141] libmachine: (multinode-600483-m02)     <rng model='virtio'>
	I1101 00:08:51.872271   26955 main.go:141] libmachine: (multinode-600483-m02)       <backend model='random'>/dev/random</backend>
	I1101 00:08:51.872283   26955 main.go:141] libmachine: (multinode-600483-m02)     </rng>
	I1101 00:08:51.872294   26955 main.go:141] libmachine: (multinode-600483-m02)     
	I1101 00:08:51.872299   26955 main.go:141] libmachine: (multinode-600483-m02)     
	I1101 00:08:51.872313   26955 main.go:141] libmachine: (multinode-600483-m02)   </devices>
	I1101 00:08:51.872329   26955 main.go:141] libmachine: (multinode-600483-m02) </domain>
	I1101 00:08:51.872352   26955 main.go:141] libmachine: (multinode-600483-m02) 
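The block above is the libvirt domain XML the kvm2 driver defines for the new node. The following is only an illustrative sketch, assuming the libvirt.org/go/libvirt bindings and a hypothetical local XML file path, of defining and booting a domain from such XML; the driver's actual code path differs:

    package main

    import (
        "log"
        "os"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        // Domain XML of the shape logged above, read from a file (path is hypothetical).
        xml, err := os.ReadFile("multinode-600483-m02.xml")
        if err != nil {
            log.Fatal(err)
        }

        // Same URI as KVMQemuURI in the config dump earlier in the log.
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Define the persistent domain, then boot it ("Creating domain..." in the log).
        dom, err := conn.DomainDefineXML(string(xml))
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil {
            log.Fatal(err)
        }
        log.Println("domain defined and started")
    }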
	I1101 00:08:51.879041   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:cc:c0:e9 in network default
	I1101 00:08:51.879651   26955 main.go:141] libmachine: (multinode-600483-m02) Ensuring networks are active...
	I1101 00:08:51.879668   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:08:51.880335   26955 main.go:141] libmachine: (multinode-600483-m02) Ensuring network default is active
	I1101 00:08:51.880697   26955 main.go:141] libmachine: (multinode-600483-m02) Ensuring network mk-multinode-600483 is active
	I1101 00:08:51.881058   26955 main.go:141] libmachine: (multinode-600483-m02) Getting domain xml...
	I1101 00:08:51.881777   26955 main.go:141] libmachine: (multinode-600483-m02) Creating domain...
	I1101 00:08:53.149440   26955 main.go:141] libmachine: (multinode-600483-m02) Waiting to get IP...
	I1101 00:08:53.150188   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:08:53.150658   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | unable to find current IP address of domain multinode-600483-m02 in network mk-multinode-600483
	I1101 00:08:53.150695   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | I1101 00:08:53.150632   27326 retry.go:31] will retry after 253.970722ms: waiting for machine to come up
	I1101 00:08:53.407346   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:08:53.407923   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | unable to find current IP address of domain multinode-600483-m02 in network mk-multinode-600483
	I1101 00:08:53.407962   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | I1101 00:08:53.407874   27326 retry.go:31] will retry after 238.105647ms: waiting for machine to come up
	I1101 00:08:53.647160   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:08:53.647664   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | unable to find current IP address of domain multinode-600483-m02 in network mk-multinode-600483
	I1101 00:08:53.647693   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | I1101 00:08:53.647610   27326 retry.go:31] will retry after 487.520175ms: waiting for machine to come up
	I1101 00:08:54.136681   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:08:54.137265   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | unable to find current IP address of domain multinode-600483-m02 in network mk-multinode-600483
	I1101 00:08:54.137301   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | I1101 00:08:54.137217   27326 retry.go:31] will retry after 540.183043ms: waiting for machine to come up
	I1101 00:08:54.678950   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:08:54.679506   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | unable to find current IP address of domain multinode-600483-m02 in network mk-multinode-600483
	I1101 00:08:54.679529   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | I1101 00:08:54.679461   27326 retry.go:31] will retry after 613.309424ms: waiting for machine to come up
	I1101 00:08:55.294423   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:08:55.294838   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | unable to find current IP address of domain multinode-600483-m02 in network mk-multinode-600483
	I1101 00:08:55.294865   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | I1101 00:08:55.294807   27326 retry.go:31] will retry after 776.885687ms: waiting for machine to come up
	I1101 00:08:56.073757   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:08:56.074240   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | unable to find current IP address of domain multinode-600483-m02 in network mk-multinode-600483
	I1101 00:08:56.074271   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | I1101 00:08:56.074188   27326 retry.go:31] will retry after 1.093249347s: waiting for machine to come up
	I1101 00:08:57.168831   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:08:57.169309   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | unable to find current IP address of domain multinode-600483-m02 in network mk-multinode-600483
	I1101 00:08:57.169338   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | I1101 00:08:57.169259   27326 retry.go:31] will retry after 1.209040289s: waiting for machine to come up
	I1101 00:08:58.380782   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:08:58.381198   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | unable to find current IP address of domain multinode-600483-m02 in network mk-multinode-600483
	I1101 00:08:58.381227   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | I1101 00:08:58.381158   27326 retry.go:31] will retry after 1.359003737s: waiting for machine to come up
	I1101 00:08:59.741927   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:08:59.742296   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | unable to find current IP address of domain multinode-600483-m02 in network mk-multinode-600483
	I1101 00:08:59.742325   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | I1101 00:08:59.742236   27326 retry.go:31] will retry after 1.420670445s: waiting for machine to come up
	I1101 00:09:01.165029   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:01.165393   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | unable to find current IP address of domain multinode-600483-m02 in network mk-multinode-600483
	I1101 00:09:01.165449   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | I1101 00:09:01.165335   27326 retry.go:31] will retry after 2.386332515s: waiting for machine to come up
	I1101 00:09:03.553194   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:03.553630   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | unable to find current IP address of domain multinode-600483-m02 in network mk-multinode-600483
	I1101 00:09:03.553651   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | I1101 00:09:03.553592   27326 retry.go:31] will retry after 3.582679528s: waiting for machine to come up
	I1101 00:09:07.137390   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:07.137848   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | unable to find current IP address of domain multinode-600483-m02 in network mk-multinode-600483
	I1101 00:09:07.137871   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | I1101 00:09:07.137811   27326 retry.go:31] will retry after 3.765644206s: waiting for machine to come up
	I1101 00:09:10.907727   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:10.908179   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | unable to find current IP address of domain multinode-600483-m02 in network mk-multinode-600483
	I1101 00:09:10.908202   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | I1101 00:09:10.908147   27326 retry.go:31] will retry after 4.29281752s: waiting for machine to come up
	I1101 00:09:15.204461   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:15.205039   26955 main.go:141] libmachine: (multinode-600483-m02) Found IP for machine: 192.168.39.109
	I1101 00:09:15.205064   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has current primary IP address 192.168.39.109 and MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
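The "will retry after ..." lines above implement a grow-on-each-attempt backoff while polling the network for the new domain's DHCP lease. A self-contained Go sketch of that wait-for-IP loop; lookupIP is a hypothetical stand-in for the lease query, and the delays only approximate the logged ones (the real retry.go also adds jitter):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // lookupIP is a hypothetical stand-in for querying the DHCP leases of the
    // mk-multinode-600483 network for the domain's MAC address.
    func lookupIP(attempt int) (string, error) {
        if attempt < 5 {
            return "", errors.New("unable to find current IP address of domain")
        }
        return "192.168.39.109", nil
    }

    func main() {
        delay := 250 * time.Millisecond
        for attempt := 1; ; attempt++ {
            ip, err := lookupIP(attempt)
            if err == nil {
                fmt.Println("Found IP for machine:", ip)
                return
            }
            // Mirror the log's "will retry after ..." behaviour: wait, then back
            // off a little more on every attempt.
            fmt.Printf("attempt %d: %v; will retry after %v\n", attempt, err, delay)
            time.Sleep(delay)
            delay = delay * 3 / 2
        }
    }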
	I1101 00:09:15.205075   26955 main.go:141] libmachine: (multinode-600483-m02) Reserving static IP address...
	I1101 00:09:15.205401   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | unable to find host DHCP lease matching {name: "multinode-600483-m02", mac: "52:54:00:07:cb:5d", ip: "192.168.39.109"} in network mk-multinode-600483
	I1101 00:09:15.284915   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | Getting to WaitForSSH function...
	I1101 00:09:15.284962   26955 main.go:141] libmachine: (multinode-600483-m02) Reserved static IP address: 192.168.39.109
	I1101 00:09:15.284977   26955 main.go:141] libmachine: (multinode-600483-m02) Waiting for SSH to be available...
	I1101 00:09:15.287724   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:15.288250   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cb:5d", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:09:06 +0000 UTC Type:0 Mac:52:54:00:07:cb:5d Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:minikube Clientid:01:52:54:00:07:cb:5d}
	I1101 00:09:15.288280   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:15.288382   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | Using SSH client type: external
	I1101 00:09:15.288416   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483-m02/id_rsa (-rw-------)
	I1101 00:09:15.288453   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.109 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 00:09:15.288474   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | About to run SSH command:
	I1101 00:09:15.288489   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | exit 0
	I1101 00:09:15.376372   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | SSH cmd err, output: <nil>: 
	I1101 00:09:15.376684   26955 main.go:141] libmachine: (multinode-600483-m02) KVM machine creation complete!
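SSH readiness above is probed by shelling out to /usr/bin/ssh with the options logged earlier and running "exit 0" until the command returns status 0. A minimal Go approximation using os/exec; the key path and address are copied from the log, most SSH options are trimmed for brevity, and the fixed 2-second poll interval is an assumption:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        args := []string{
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-i", "/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483-m02/id_rsa",
            "docker@192.168.39.109",
            "exit 0",
        }
        for {
            // A zero exit status from "exit 0" means sshd is up and the key is accepted.
            if err := exec.Command("ssh", args...).Run(); err == nil {
                fmt.Println("SSH is available")
                return
            }
            time.Sleep(2 * time.Second)
        }
    }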
	I1101 00:09:15.377061   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetConfigRaw
	I1101 00:09:15.377766   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .DriverName
	I1101 00:09:15.378007   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .DriverName
	I1101 00:09:15.378206   26955 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1101 00:09:15.378229   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetState
	I1101 00:09:15.379642   26955 main.go:141] libmachine: Detecting operating system of created instance...
	I1101 00:09:15.379670   26955 main.go:141] libmachine: Waiting for SSH to be available...
	I1101 00:09:15.379679   26955 main.go:141] libmachine: Getting to WaitForSSH function...
	I1101 00:09:15.379703   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHHostname
	I1101 00:09:15.382324   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:15.382777   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cb:5d", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:09:06 +0000 UTC Type:0 Mac:52:54:00:07:cb:5d Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-600483-m02 Clientid:01:52:54:00:07:cb:5d}
	I1101 00:09:15.382812   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:15.382990   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHPort
	I1101 00:09:15.383214   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHKeyPath
	I1101 00:09:15.383376   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHKeyPath
	I1101 00:09:15.383540   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHUsername
	I1101 00:09:15.383702   26955 main.go:141] libmachine: Using SSH client type: native
	I1101 00:09:15.384096   26955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1101 00:09:15.384112   26955 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1101 00:09:15.491438   26955 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 00:09:15.491460   26955 main.go:141] libmachine: Detecting the provisioner...
	I1101 00:09:15.491468   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHHostname
	I1101 00:09:15.494978   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:15.495457   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cb:5d", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:09:06 +0000 UTC Type:0 Mac:52:54:00:07:cb:5d Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-600483-m02 Clientid:01:52:54:00:07:cb:5d}
	I1101 00:09:15.495500   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:15.495697   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHPort
	I1101 00:09:15.495924   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHKeyPath
	I1101 00:09:15.496119   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHKeyPath
	I1101 00:09:15.496269   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHUsername
	I1101 00:09:15.496460   26955 main.go:141] libmachine: Using SSH client type: native
	I1101 00:09:15.496765   26955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1101 00:09:15.496777   26955 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1101 00:09:15.608904   26955 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g0cee705-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1101 00:09:15.608988   26955 main.go:141] libmachine: found compatible host: buildroot
	I1101 00:09:15.608997   26955 main.go:141] libmachine: Provisioning with buildroot...
	I1101 00:09:15.609006   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetMachineName
	I1101 00:09:15.609255   26955 buildroot.go:166] provisioning hostname "multinode-600483-m02"
	I1101 00:09:15.609305   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetMachineName
	I1101 00:09:15.609494   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHHostname
	I1101 00:09:15.612378   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:15.612746   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cb:5d", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:09:06 +0000 UTC Type:0 Mac:52:54:00:07:cb:5d Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-600483-m02 Clientid:01:52:54:00:07:cb:5d}
	I1101 00:09:15.612780   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:15.612976   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHPort
	I1101 00:09:15.613183   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHKeyPath
	I1101 00:09:15.613392   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHKeyPath
	I1101 00:09:15.613532   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHUsername
	I1101 00:09:15.613712   26955 main.go:141] libmachine: Using SSH client type: native
	I1101 00:09:15.614018   26955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1101 00:09:15.614035   26955 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-600483-m02 && echo "multinode-600483-m02" | sudo tee /etc/hostname
	I1101 00:09:15.735391   26955 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-600483-m02
	
	I1101 00:09:15.735424   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHHostname
	I1101 00:09:15.737997   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:15.738376   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cb:5d", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:09:06 +0000 UTC Type:0 Mac:52:54:00:07:cb:5d Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-600483-m02 Clientid:01:52:54:00:07:cb:5d}
	I1101 00:09:15.738411   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:15.738566   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHPort
	I1101 00:09:15.738774   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHKeyPath
	I1101 00:09:15.738946   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHKeyPath
	I1101 00:09:15.739081   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHUsername
	I1101 00:09:15.739253   26955 main.go:141] libmachine: Using SSH client type: native
	I1101 00:09:15.739661   26955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1101 00:09:15.739689   26955 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-600483-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-600483-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-600483-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 00:09:15.860422   26955 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 00:09:15.860453   26955 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1101 00:09:15.860477   26955 buildroot.go:174] setting up certificates
	I1101 00:09:15.860488   26955 provision.go:83] configureAuth start
	I1101 00:09:15.860501   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetMachineName
	I1101 00:09:15.860810   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetIP
	I1101 00:09:15.863629   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:15.864002   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cb:5d", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:09:06 +0000 UTC Type:0 Mac:52:54:00:07:cb:5d Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-600483-m02 Clientid:01:52:54:00:07:cb:5d}
	I1101 00:09:15.864039   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:15.864231   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHHostname
	I1101 00:09:15.866579   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:15.866936   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cb:5d", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:09:06 +0000 UTC Type:0 Mac:52:54:00:07:cb:5d Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-600483-m02 Clientid:01:52:54:00:07:cb:5d}
	I1101 00:09:15.866976   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:15.867052   26955 provision.go:138] copyHostCerts
	I1101 00:09:15.867126   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 00:09:15.867171   26955 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem, removing ...
	I1101 00:09:15.867185   26955 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 00:09:15.867264   26955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1101 00:09:15.867357   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 00:09:15.867382   26955 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem, removing ...
	I1101 00:09:15.867391   26955 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 00:09:15.867427   26955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1101 00:09:15.867488   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 00:09:15.867508   26955 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem, removing ...
	I1101 00:09:15.867517   26955 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 00:09:15.867558   26955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1101 00:09:15.867620   26955 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.multinode-600483-m02 san=[192.168.39.109 192.168.39.109 localhost 127.0.0.1 minikube multinode-600483-m02]
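The "generating server cert" step above issues a server certificate signed by the minikube CA, carrying the listed IP and DNS SANs. An illustrative crypto/x509 sketch of that flow; for self-containment the CA key pair is generated inline here, whereas the real run loads ca.pem/ca-key.pem from the certs directory, and the validity period is arbitrary:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Hypothetical CA generated on the spot (stand-in for ca.pem / ca-key.pem).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the SANs from the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-600483-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("192.168.39.109"), net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube", "multinode-600483-m02"},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        // Emit the signed server certificate in PEM form (stand-in for server.pem).
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }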
	I1101 00:09:16.304964   26955 provision.go:172] copyRemoteCerts
	I1101 00:09:16.305017   26955 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 00:09:16.305039   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHHostname
	I1101 00:09:16.307738   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:16.308133   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cb:5d", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:09:06 +0000 UTC Type:0 Mac:52:54:00:07:cb:5d Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-600483-m02 Clientid:01:52:54:00:07:cb:5d}
	I1101 00:09:16.308167   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:16.308291   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHPort
	I1101 00:09:16.308500   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHKeyPath
	I1101 00:09:16.308656   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHUsername
	I1101 00:09:16.308812   26955 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483-m02/id_rsa Username:docker}
	I1101 00:09:16.393689   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 00:09:16.393772   26955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 00:09:16.416685   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 00:09:16.416757   26955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1101 00:09:16.439794   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 00:09:16.439860   26955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 00:09:16.462683   26955 provision.go:86] duration metric: configureAuth took 602.182016ms
	I1101 00:09:16.462712   26955 buildroot.go:189] setting minikube options for container-runtime
	I1101 00:09:16.462913   26955 config.go:182] Loaded profile config "multinode-600483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:09:16.462993   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHHostname
	I1101 00:09:16.465645   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:16.466033   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cb:5d", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:09:06 +0000 UTC Type:0 Mac:52:54:00:07:cb:5d Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-600483-m02 Clientid:01:52:54:00:07:cb:5d}
	I1101 00:09:16.466073   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:16.466199   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHPort
	I1101 00:09:16.466429   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHKeyPath
	I1101 00:09:16.466582   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHKeyPath
	I1101 00:09:16.466733   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHUsername
	I1101 00:09:16.466869   26955 main.go:141] libmachine: Using SSH client type: native
	I1101 00:09:16.467245   26955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1101 00:09:16.467265   26955 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 00:09:16.768581   26955 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 00:09:16.768605   26955 main.go:141] libmachine: Checking connection to Docker...
	I1101 00:09:16.768614   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetURL
	I1101 00:09:16.770061   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | Using libvirt version 6000000
	I1101 00:09:16.772128   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:16.772523   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cb:5d", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:09:06 +0000 UTC Type:0 Mac:52:54:00:07:cb:5d Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-600483-m02 Clientid:01:52:54:00:07:cb:5d}
	I1101 00:09:16.772565   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:16.772724   26955 main.go:141] libmachine: Docker is up and running!
	I1101 00:09:16.772740   26955 main.go:141] libmachine: Reticulating splines...
	I1101 00:09:16.772748   26955 client.go:171] LocalClient.Create took 25.24314855s
	I1101 00:09:16.772773   26955 start.go:167] duration metric: libmachine.API.Create for "multinode-600483" took 25.243199109s
	I1101 00:09:16.772785   26955 start.go:300] post-start starting for "multinode-600483-m02" (driver="kvm2")
	I1101 00:09:16.772825   26955 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 00:09:16.772852   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .DriverName
	I1101 00:09:16.773097   26955 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 00:09:16.773129   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHHostname
	I1101 00:09:16.775311   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:16.775700   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cb:5d", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:09:06 +0000 UTC Type:0 Mac:52:54:00:07:cb:5d Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-600483-m02 Clientid:01:52:54:00:07:cb:5d}
	I1101 00:09:16.775747   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:16.775886   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHPort
	I1101 00:09:16.776078   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHKeyPath
	I1101 00:09:16.776242   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHUsername
	I1101 00:09:16.776424   26955 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483-m02/id_rsa Username:docker}
	I1101 00:09:16.861149   26955 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 00:09:16.865523   26955 command_runner.go:130] > NAME=Buildroot
	I1101 00:09:16.865551   26955 command_runner.go:130] > VERSION=2021.02.12-1-g0cee705-dirty
	I1101 00:09:16.865558   26955 command_runner.go:130] > ID=buildroot
	I1101 00:09:16.865566   26955 command_runner.go:130] > VERSION_ID=2021.02.12
	I1101 00:09:16.865576   26955 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1101 00:09:16.865610   26955 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 00:09:16.865623   26955 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1101 00:09:16.865697   26955 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1101 00:09:16.865795   26955 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> 145042.pem in /etc/ssl/certs
	I1101 00:09:16.865809   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> /etc/ssl/certs/145042.pem
	I1101 00:09:16.865892   26955 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 00:09:16.874831   26955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /etc/ssl/certs/145042.pem (1708 bytes)
	I1101 00:09:16.899393   26955 start.go:303] post-start completed in 126.593469ms
	I1101 00:09:16.899449   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetConfigRaw
	I1101 00:09:16.900091   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetIP
	I1101 00:09:16.902919   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:16.903316   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cb:5d", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:09:06 +0000 UTC Type:0 Mac:52:54:00:07:cb:5d Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-600483-m02 Clientid:01:52:54:00:07:cb:5d}
	I1101 00:09:16.903358   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:16.903598   26955 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/config.json ...
	I1101 00:09:16.903807   26955 start.go:128] duration metric: createHost completed in 25.39154117s
	I1101 00:09:16.903833   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHHostname
	I1101 00:09:16.906218   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:16.906607   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cb:5d", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:09:06 +0000 UTC Type:0 Mac:52:54:00:07:cb:5d Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-600483-m02 Clientid:01:52:54:00:07:cb:5d}
	I1101 00:09:16.906640   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:16.906767   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHPort
	I1101 00:09:16.906973   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHKeyPath
	I1101 00:09:16.907164   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHKeyPath
	I1101 00:09:16.907328   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHUsername
	I1101 00:09:16.907515   26955 main.go:141] libmachine: Using SSH client type: native
	I1101 00:09:16.907854   26955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1101 00:09:16.907868   26955 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1101 00:09:17.017037   26955 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698797356.991467803
	
	I1101 00:09:17.017076   26955 fix.go:206] guest clock: 1698797356.991467803
	I1101 00:09:17.017085   26955 fix.go:219] Guest: 2023-11-01 00:09:16.991467803 +0000 UTC Remote: 2023-11-01 00:09:16.903819533 +0000 UTC m=+91.962003828 (delta=87.64827ms)
	I1101 00:09:17.017101   26955 fix.go:190] guest clock delta is within tolerance: 87.64827ms
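
The fix.go lines above are minikube's guest-clock check: it runs date +%s.%N on the new node over SSH, parses the epoch timestamp, and compares it with the host clock before declaring the delta within tolerance. A minimal Go sketch of that comparison, reusing the exact values from this log; the helper name clockDelta is invented and this is not minikube's actual fix.go code:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses `date +%s.%N` output (seconds.nanoseconds) and returns
    // the signed difference between the guest clock and the given host time.
    func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
        parts := strings.SplitN(strings.TrimSpace(guestOutput), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, err
        }
        var nsec int64
        if len(parts) == 2 {
            // %N prints nine digits, so the fractional part is already nanoseconds.
            nsec, _ = strconv.ParseInt(parts[1], 10, 64)
        }
        return time.Unix(sec, nsec).Sub(host), nil
    }

    func main() {
        // Values from the log: the guest reported 1698797356.991467803 while the
        // host clock read 2023-11-01 00:09:16.903819533 UTC.
        host := time.Date(2023, 11, 1, 0, 9, 16, 903819533, time.UTC)
        delta, err := clockDelta("1698797356.991467803", host)
        if err != nil {
            panic(err)
        }
        fmt.Printf("guest clock delta: %v\n", delta) // ~87.64827ms, within tolerance
    }
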
	I1101 00:09:17.017105   26955 start.go:83] releasing machines lock for "multinode-600483-m02", held for 25.504917287s
	I1101 00:09:17.017125   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .DriverName
	I1101 00:09:17.017423   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetIP
	I1101 00:09:17.019904   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:17.020271   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cb:5d", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:09:06 +0000 UTC Type:0 Mac:52:54:00:07:cb:5d Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-600483-m02 Clientid:01:52:54:00:07:cb:5d}
	I1101 00:09:17.020314   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:17.022917   26955 out.go:177] * Found network options:
	I1101 00:09:17.024501   26955 out.go:177]   - NO_PROXY=192.168.39.130
	W1101 00:09:17.026132   26955 proxy.go:119] fail to check proxy env: Error ip not in block
	I1101 00:09:17.026164   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .DriverName
	I1101 00:09:17.026820   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .DriverName
	I1101 00:09:17.027028   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .DriverName
	I1101 00:09:17.027090   26955 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 00:09:17.027137   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHHostname
	W1101 00:09:17.027261   26955 proxy.go:119] fail to check proxy env: Error ip not in block
	I1101 00:09:17.027338   26955 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 00:09:17.027361   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHHostname
	I1101 00:09:17.030796   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:17.030825   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:17.030851   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cb:5d", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:09:06 +0000 UTC Type:0 Mac:52:54:00:07:cb:5d Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-600483-m02 Clientid:01:52:54:00:07:cb:5d}
	I1101 00:09:17.030871   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:17.031147   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHPort
	I1101 00:09:17.031230   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cb:5d", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:09:06 +0000 UTC Type:0 Mac:52:54:00:07:cb:5d Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-600483-m02 Clientid:01:52:54:00:07:cb:5d}
	I1101 00:09:17.031261   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:17.031403   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHKeyPath
	I1101 00:09:17.031413   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHPort
	I1101 00:09:17.031636   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHUsername
	I1101 00:09:17.031638   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHKeyPath
	I1101 00:09:17.031827   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHUsername
	I1101 00:09:17.031823   26955 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483-m02/id_rsa Username:docker}
	I1101 00:09:17.031986   26955 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483-m02/id_rsa Username:docker}
	I1101 00:09:17.145491   26955 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
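
The curl -sS -m 2 https://registry.k8s.io/ run above is only a reachability probe of the image registry; the "Temporary Redirect" body it received is enough to prove connectivity. Roughly the same probe in Go with the same two-second budget (a sketch only; unlike curl here, Go's default client follows the redirect):

    package main

    import (
        "fmt"
        "net/http"
        "os"
        "time"
    )

    func main() {
        // Same 2s budget as `curl -m 2`; any response at all shows the registry
        // is reachable from the node.
        client := &http.Client{Timeout: 2 * time.Second}
        resp, err := client.Get("https://registry.k8s.io/")
        if err != nil {
            fmt.Fprintln(os.Stderr, "registry not reachable:", err)
            os.Exit(1)
        }
        defer resp.Body.Close()
        fmt.Println("registry reachable, status:", resp.Status)
    }
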
	I1101 00:09:17.285003   26955 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1101 00:09:17.291332   26955 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1101 00:09:17.291585   26955 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 00:09:17.291629   26955 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 00:09:17.307123   26955 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1101 00:09:17.307161   26955 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
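
The find/mv one-liner above renames every bridge or podman config in /etc/cni/net.d to *.mk_disabled so it cannot conflict with the CNI minikube is about to set up; here it disabled 87-podman-bridge.conflist. The same effect as a small Go sketch (illustrative only; minikube runs the shell command over SSH as shown, and disableConflictingCNI is a made-up helper name):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableConflictingCNI renames bridge/podman CNI configs so the container
    // runtime ignores them, mirroring `find ... -exec mv {} {}.mk_disabled`.
    func disableConflictingCNI(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return disabled, err
                }
                disabled = append(disabled, src)
            }
        }
        return disabled, nil
    }

    func main() {
        files, err := disableConflictingCNI("/etc/cni/net.d")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("disabled:", files)
    }
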
	I1101 00:09:17.307170   26955 start.go:472] detecting cgroup driver to use...
	I1101 00:09:17.307245   26955 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 00:09:17.323432   26955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 00:09:17.336954   26955 docker.go:204] disabling cri-docker service (if available) ...
	I1101 00:09:17.337012   26955 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 00:09:17.349919   26955 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 00:09:17.362898   26955 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 00:09:17.376578   26955 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1101 00:09:17.470096   26955 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 00:09:17.487139   26955 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1101 00:09:17.610362   26955 docker.go:220] disabling docker service ...
	I1101 00:09:17.610440   26955 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 00:09:17.624599   26955 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 00:09:17.637039   26955 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1101 00:09:17.637134   26955 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 00:09:17.748172   26955 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1101 00:09:17.748253   26955 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 00:09:17.855806   26955 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1101 00:09:17.855838   26955 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1101 00:09:17.855905   26955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 00:09:17.869550   26955 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 00:09:17.886613   26955 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1101 00:09:17.886986   26955 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 00:09:17.887051   26955 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:09:17.896743   26955 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 00:09:17.896811   26955 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:09:17.906076   26955 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:09:17.915211   26955 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:09:17.924476   26955 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
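
The sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.9, set cgroup_manager to "cgroupfs", and re-insert conmon_cgroup = "pod" right after it. A rough Go equivalent of those rewrites (a sketch for illustration; minikube really does drive sed over SSH as logged, and rewriteCrioConf is an invented name):

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    var (
        pauseRe  = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        cgroupRe = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        conmonRe = regexp.MustCompile(`(?m)^conmon_cgroup = .*$\n?`)
    )

    // rewriteCrioConf applies the same three edits as the sed commands above.
    func rewriteCrioConf(conf string) string {
        conf = pauseRe.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
        conf = conmonRe.ReplaceAllString(conf, "") // drop any existing conmon_cgroup line
        // Replace the cgroup manager and put conmon_cgroup = "pod" right after it.
        return cgroupRe.ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
    }

    func main() {
        in := strings.Join([]string{
            `pause_image = "registry.k8s.io/pause:3.2"`,
            `cgroup_manager = "systemd"`,
            `conmon_cgroup = "system.slice"`,
        }, "\n")
        fmt.Println(rewriteCrioConf(in))
    }
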
	I1101 00:09:17.934262   26955 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 00:09:17.943090   26955 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 00:09:17.943137   26955 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 00:09:17.943194   26955 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 00:09:17.955737   26955 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
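
The sysctl failure above is expected on a fresh node: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded, which the following modprobe does, and ip_forward is then switched on. A small sketch of the same sequence (paths as in the log, must run as root; not minikube's code, and ensureBridgeNetfilter is an invented helper):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // ensureBridgeNetfilter loads br_netfilter if the bridge sysctl is missing
    // and enables IPv4 forwarding, matching the modprobe/echo steps above.
    func ensureBridgeNetfilter() error {
        const knob = "/proc/sys/net/bridge/bridge-nf-call-iptables"
        if _, err := os.Stat(knob); os.IsNotExist(err) {
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
            }
        }
        return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
    }

    func main() {
        if err := ensureBridgeNetfilter(); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("bridge netfilter and ip_forward configured")
    }
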
	I1101 00:09:17.964018   26955 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:09:18.071485   26955 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 00:09:18.238041   26955 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 00:09:18.238113   26955 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 00:09:18.242757   26955 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1101 00:09:18.242783   26955 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1101 00:09:18.242791   26955 command_runner.go:130] > Device: 16h/22d	Inode: 709         Links: 1
	I1101 00:09:18.242802   26955 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1101 00:09:18.242814   26955 command_runner.go:130] > Access: 2023-11-01 00:09:18.200874177 +0000
	I1101 00:09:18.242872   26955 command_runner.go:130] > Modify: 2023-11-01 00:09:18.200874177 +0000
	I1101 00:09:18.242890   26955 command_runner.go:130] > Change: 2023-11-01 00:09:18.200874177 +0000
	I1101 00:09:18.242902   26955 command_runner.go:130] >  Birth: -
	I1101 00:09:18.242928   26955 start.go:540] Will wait 60s for crictl version
	I1101 00:09:18.242997   26955 ssh_runner.go:195] Run: which crictl
	I1101 00:09:18.246888   26955 command_runner.go:130] > /usr/bin/crictl
	I1101 00:09:18.246958   26955 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 00:09:18.285162   26955 command_runner.go:130] > Version:  0.1.0
	I1101 00:09:18.285184   26955 command_runner.go:130] > RuntimeName:  cri-o
	I1101 00:09:18.285189   26955 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1101 00:09:18.285194   26955 command_runner.go:130] > RuntimeApiVersion:  v1
	I1101 00:09:18.285213   26955 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
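
"Will wait 60s for socket path" and "Will wait 60s for crictl version" above are plain polls: stat the CRI-O socket until it appears or the deadline passes, then ask crictl which runtime answers on it. A minimal sketch of that pattern (the 500ms poll interval and the waitForSocket helper are illustrative choices, not minikube's exact implementation):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    // waitForSocket polls for the socket path until it exists or the timeout passes.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Print(string(out)) // e.g. RuntimeName: cri-o, RuntimeVersion: 1.24.1
    }
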
	I1101 00:09:18.285274   26955 ssh_runner.go:195] Run: crio --version
	I1101 00:09:18.331380   26955 command_runner.go:130] > crio version 1.24.1
	I1101 00:09:18.331405   26955 command_runner.go:130] > Version:          1.24.1
	I1101 00:09:18.331416   26955 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1101 00:09:18.331423   26955 command_runner.go:130] > GitTreeState:     dirty
	I1101 00:09:18.331432   26955 command_runner.go:130] > BuildDate:        2023-10-31T22:57:11Z
	I1101 00:09:18.331440   26955 command_runner.go:130] > GoVersion:        go1.19.9
	I1101 00:09:18.331447   26955 command_runner.go:130] > Compiler:         gc
	I1101 00:09:18.331462   26955 command_runner.go:130] > Platform:         linux/amd64
	I1101 00:09:18.331470   26955 command_runner.go:130] > Linkmode:         dynamic
	I1101 00:09:18.331485   26955 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1101 00:09:18.331496   26955 command_runner.go:130] > SeccompEnabled:   true
	I1101 00:09:18.331503   26955 command_runner.go:130] > AppArmorEnabled:  false
	I1101 00:09:18.331595   26955 ssh_runner.go:195] Run: crio --version
	I1101 00:09:18.381510   26955 command_runner.go:130] > crio version 1.24.1
	I1101 00:09:18.381534   26955 command_runner.go:130] > Version:          1.24.1
	I1101 00:09:18.381544   26955 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1101 00:09:18.381551   26955 command_runner.go:130] > GitTreeState:     dirty
	I1101 00:09:18.381579   26955 command_runner.go:130] > BuildDate:        2023-10-31T22:57:11Z
	I1101 00:09:18.381586   26955 command_runner.go:130] > GoVersion:        go1.19.9
	I1101 00:09:18.381591   26955 command_runner.go:130] > Compiler:         gc
	I1101 00:09:18.381601   26955 command_runner.go:130] > Platform:         linux/amd64
	I1101 00:09:18.381632   26955 command_runner.go:130] > Linkmode:         dynamic
	I1101 00:09:18.381652   26955 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1101 00:09:18.381659   26955 command_runner.go:130] > SeccompEnabled:   true
	I1101 00:09:18.381666   26955 command_runner.go:130] > AppArmorEnabled:  false
	I1101 00:09:18.385175   26955 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1101 00:09:18.386700   26955 out.go:177]   - env NO_PROXY=192.168.39.130
	I1101 00:09:18.388045   26955 main.go:141] libmachine: (multinode-600483-m02) Calling .GetIP
	I1101 00:09:18.390610   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:18.390936   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cb:5d", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:09:06 +0000 UTC Type:0 Mac:52:54:00:07:cb:5d Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-600483-m02 Clientid:01:52:54:00:07:cb:5d}
	I1101 00:09:18.390977   26955 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:09:18.391251   26955 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1101 00:09:18.395317   26955 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
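
The grep-and-rewrite one-liner above keeps /etc/hosts idempotent: any stale host.minikube.internal line is dropped before the gateway mapping 192.168.39.1 is appended. The same idea as a Go sketch that only transforms the file content rather than writing /etc/hosts (upsertHostsEntry is a made-up helper; the real step is the shell pipeline in the log):

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHostsEntry drops any existing line for name and appends "ip\tname",
    // so repeated runs leave a single entry, like the grep pipeline above.
    func upsertHostsEntry(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if strings.Contains(line, name) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        const current = "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
        fmt.Print(upsertHostsEntry(current, "192.168.39.1", "host.minikube.internal"))
    }
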
	I1101 00:09:18.406737   26955 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483 for IP: 192.168.39.109
	I1101 00:09:18.406774   26955 certs.go:190] acquiring lock for shared ca certs: {Name:mk6185b3938c4b51e80f57b7c81926c81b632b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:09:18.406913   26955 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key
	I1101 00:09:18.406970   26955 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key
	I1101 00:09:18.406988   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1101 00:09:18.407007   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1101 00:09:18.407026   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1101 00:09:18.407047   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1101 00:09:18.407106   26955 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem (1338 bytes)
	W1101 00:09:18.407145   26955 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504_empty.pem, impossibly tiny 0 bytes
	I1101 00:09:18.407161   26955 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 00:09:18.407200   26955 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem (1078 bytes)
	I1101 00:09:18.407233   26955 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem (1123 bytes)
	I1101 00:09:18.407265   26955 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem (1675 bytes)
	I1101 00:09:18.407322   26955 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem (1708 bytes)
	I1101 00:09:18.407362   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem -> /usr/share/ca-certificates/14504.pem
	I1101 00:09:18.407382   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> /usr/share/ca-certificates/145042.pem
	I1101 00:09:18.407401   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:09:18.407727   26955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 00:09:18.429521   26955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 00:09:18.450393   26955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 00:09:18.471445   26955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 00:09:18.493305   26955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem --> /usr/share/ca-certificates/14504.pem (1338 bytes)
	I1101 00:09:18.516785   26955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /usr/share/ca-certificates/145042.pem (1708 bytes)
	I1101 00:09:18.538415   26955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 00:09:18.562338   26955 ssh_runner.go:195] Run: openssl version
	I1101 00:09:18.568005   26955 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1101 00:09:18.568126   26955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14504.pem && ln -fs /usr/share/ca-certificates/14504.pem /etc/ssl/certs/14504.pem"
	I1101 00:09:18.578704   26955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14504.pem
	I1101 00:09:18.583107   26955 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 00:09:18.583160   26955 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 00:09:18.583216   26955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14504.pem
	I1101 00:09:18.589051   26955 command_runner.go:130] > 51391683
	I1101 00:09:18.589372   26955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14504.pem /etc/ssl/certs/51391683.0"
	I1101 00:09:18.599897   26955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145042.pem && ln -fs /usr/share/ca-certificates/145042.pem /etc/ssl/certs/145042.pem"
	I1101 00:09:18.610745   26955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145042.pem
	I1101 00:09:18.615430   26955 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 00:09:18.615459   26955 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 00:09:18.615498   26955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145042.pem
	I1101 00:09:18.620678   26955 command_runner.go:130] > 3ec20f2e
	I1101 00:09:18.620821   26955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145042.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 00:09:18.630989   26955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 00:09:18.641601   26955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:09:18.646133   26955 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:09:18.646528   26955 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:09:18.646599   26955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:09:18.651852   26955 command_runner.go:130] > b5213941
	I1101 00:09:18.652203   26955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
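
Each openssl/ln pair above installs a certificate where OpenSSL's CA lookup expects it: a symlink named <subject-hash>.0 in /etc/ssl/certs (b5213941.0 for the minikube CA here). A Go sketch of that convention, shelling out to the same openssl invocation (linkCertByHash is an invented helper, the paths assume the layout seen in the log, and it needs write access to /etc/ssl/certs):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCertByHash asks openssl for the certificate's subject hash and creates
    // the <hash>.0 symlink that OpenSSL's CA directory lookup uses.
    func linkCertByHash(certPath, certsDir string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return "", err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join(certsDir, hash+".0")
        _ = os.Remove(link) // mimic `ln -fs`: replace any existing link
        return link, os.Symlink(certPath, link)
    }

    func main() {
        link, err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("created", link)
    }
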
	I1101 00:09:18.663784   26955 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 00:09:18.668264   26955 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1101 00:09:18.668304   26955 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1101 00:09:18.668400   26955 ssh_runner.go:195] Run: crio config
	I1101 00:09:18.729046   26955 command_runner.go:130] ! time="2023-11-01 00:09:18.705981829Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1101 00:09:18.729071   26955 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1101 00:09:18.742270   26955 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1101 00:09:18.742298   26955 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1101 00:09:18.742305   26955 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1101 00:09:18.742309   26955 command_runner.go:130] > #
	I1101 00:09:18.742316   26955 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1101 00:09:18.742322   26955 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1101 00:09:18.742331   26955 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1101 00:09:18.742349   26955 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1101 00:09:18.742360   26955 command_runner.go:130] > # reload'.
	I1101 00:09:18.742376   26955 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1101 00:09:18.742390   26955 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1101 00:09:18.742402   26955 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1101 00:09:18.742410   26955 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1101 00:09:18.742417   26955 command_runner.go:130] > [crio]
	I1101 00:09:18.742426   26955 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1101 00:09:18.742434   26955 command_runner.go:130] > # containers images, in this directory.
	I1101 00:09:18.742464   26955 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1101 00:09:18.742485   26955 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1101 00:09:18.742496   26955 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1101 00:09:18.742502   26955 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1101 00:09:18.742508   26955 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1101 00:09:18.742513   26955 command_runner.go:130] > storage_driver = "overlay"
	I1101 00:09:18.742521   26955 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1101 00:09:18.742531   26955 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1101 00:09:18.742538   26955 command_runner.go:130] > storage_option = [
	I1101 00:09:18.742546   26955 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1101 00:09:18.742552   26955 command_runner.go:130] > ]
	I1101 00:09:18.742563   26955 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1101 00:09:18.742574   26955 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1101 00:09:18.742581   26955 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1101 00:09:18.742591   26955 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1101 00:09:18.742608   26955 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1101 00:09:18.742620   26955 command_runner.go:130] > # always happen on a node reboot
	I1101 00:09:18.742629   26955 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1101 00:09:18.742642   26955 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1101 00:09:18.742649   26955 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1101 00:09:18.742673   26955 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1101 00:09:18.742686   26955 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1101 00:09:18.742699   26955 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1101 00:09:18.742717   26955 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1101 00:09:18.742727   26955 command_runner.go:130] > # internal_wipe = true
	I1101 00:09:18.742737   26955 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1101 00:09:18.742750   26955 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1101 00:09:18.742757   26955 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1101 00:09:18.742766   26955 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1101 00:09:18.742776   26955 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1101 00:09:18.742786   26955 command_runner.go:130] > [crio.api]
	I1101 00:09:18.742795   26955 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1101 00:09:18.742807   26955 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1101 00:09:18.742823   26955 command_runner.go:130] > # IP address on which the stream server will listen.
	I1101 00:09:18.742834   26955 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1101 00:09:18.742846   26955 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1101 00:09:18.742857   26955 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1101 00:09:18.742862   26955 command_runner.go:130] > # stream_port = "0"
	I1101 00:09:18.742873   26955 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1101 00:09:18.742884   26955 command_runner.go:130] > # stream_enable_tls = false
	I1101 00:09:18.742895   26955 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1101 00:09:18.742906   26955 command_runner.go:130] > # stream_idle_timeout = ""
	I1101 00:09:18.742918   26955 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1101 00:09:18.742931   26955 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1101 00:09:18.742941   26955 command_runner.go:130] > # minutes.
	I1101 00:09:18.742948   26955 command_runner.go:130] > # stream_tls_cert = ""
	I1101 00:09:18.742959   26955 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1101 00:09:18.742968   26955 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1101 00:09:18.742979   26955 command_runner.go:130] > # stream_tls_key = ""
	I1101 00:09:18.742989   26955 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1101 00:09:18.743004   26955 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1101 00:09:18.743019   26955 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1101 00:09:18.743029   26955 command_runner.go:130] > # stream_tls_ca = ""
	I1101 00:09:18.743041   26955 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1101 00:09:18.743049   26955 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1101 00:09:18.743061   26955 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1101 00:09:18.743073   26955 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1101 00:09:18.743111   26955 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1101 00:09:18.743125   26955 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1101 00:09:18.743132   26955 command_runner.go:130] > [crio.runtime]
	I1101 00:09:18.743141   26955 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1101 00:09:18.743149   26955 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1101 00:09:18.743156   26955 command_runner.go:130] > # "nofile=1024:2048"
	I1101 00:09:18.743170   26955 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1101 00:09:18.743181   26955 command_runner.go:130] > # default_ulimits = [
	I1101 00:09:18.743187   26955 command_runner.go:130] > # ]
	I1101 00:09:18.743201   26955 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1101 00:09:18.743208   26955 command_runner.go:130] > # no_pivot = false
	I1101 00:09:18.743224   26955 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1101 00:09:18.743240   26955 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1101 00:09:18.743251   26955 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1101 00:09:18.743264   26955 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1101 00:09:18.743273   26955 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1101 00:09:18.743288   26955 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1101 00:09:18.743300   26955 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1101 00:09:18.743307   26955 command_runner.go:130] > # Cgroup setting for conmon
	I1101 00:09:18.743322   26955 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1101 00:09:18.743330   26955 command_runner.go:130] > conmon_cgroup = "pod"
	I1101 00:09:18.743336   26955 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1101 00:09:18.743348   26955 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1101 00:09:18.743364   26955 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1101 00:09:18.743371   26955 command_runner.go:130] > conmon_env = [
	I1101 00:09:18.743385   26955 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1101 00:09:18.743391   26955 command_runner.go:130] > ]
	I1101 00:09:18.743400   26955 command_runner.go:130] > # Additional environment variables to set for all the
	I1101 00:09:18.743409   26955 command_runner.go:130] > # containers. These are overridden if set in the
	I1101 00:09:18.743419   26955 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1101 00:09:18.743436   26955 command_runner.go:130] > # default_env = [
	I1101 00:09:18.743440   26955 command_runner.go:130] > # ]
	I1101 00:09:18.743456   26955 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1101 00:09:18.743463   26955 command_runner.go:130] > # selinux = false
	I1101 00:09:18.743474   26955 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1101 00:09:18.743484   26955 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1101 00:09:18.743494   26955 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1101 00:09:18.743502   26955 command_runner.go:130] > # seccomp_profile = ""
	I1101 00:09:18.743512   26955 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1101 00:09:18.743522   26955 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1101 00:09:18.743534   26955 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1101 00:09:18.743546   26955 command_runner.go:130] > # which might increase security.
	I1101 00:09:18.743553   26955 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1101 00:09:18.743562   26955 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1101 00:09:18.743573   26955 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1101 00:09:18.743587   26955 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1101 00:09:18.743599   26955 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1101 00:09:18.743611   26955 command_runner.go:130] > # This option supports live configuration reload.
	I1101 00:09:18.743623   26955 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1101 00:09:18.743637   26955 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1101 00:09:18.743647   26955 command_runner.go:130] > # the cgroup blockio controller.
	I1101 00:09:18.743654   26955 command_runner.go:130] > # blockio_config_file = ""
	I1101 00:09:18.743664   26955 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1101 00:09:18.743672   26955 command_runner.go:130] > # irqbalance daemon.
	I1101 00:09:18.743685   26955 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1101 00:09:18.743697   26955 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1101 00:09:18.743709   26955 command_runner.go:130] > # This option supports live configuration reload.
	I1101 00:09:18.743716   26955 command_runner.go:130] > # rdt_config_file = ""
	I1101 00:09:18.743729   26955 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1101 00:09:18.743737   26955 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1101 00:09:18.743754   26955 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1101 00:09:18.743761   26955 command_runner.go:130] > # separate_pull_cgroup = ""
	I1101 00:09:18.743772   26955 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1101 00:09:18.743790   26955 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1101 00:09:18.743801   26955 command_runner.go:130] > # will be added.
	I1101 00:09:18.743809   26955 command_runner.go:130] > # default_capabilities = [
	I1101 00:09:18.743821   26955 command_runner.go:130] > # 	"CHOWN",
	I1101 00:09:18.743831   26955 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1101 00:09:18.743838   26955 command_runner.go:130] > # 	"FSETID",
	I1101 00:09:18.743849   26955 command_runner.go:130] > # 	"FOWNER",
	I1101 00:09:18.743856   26955 command_runner.go:130] > # 	"SETGID",
	I1101 00:09:18.743861   26955 command_runner.go:130] > # 	"SETUID",
	I1101 00:09:18.743866   26955 command_runner.go:130] > # 	"SETPCAP",
	I1101 00:09:18.743877   26955 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1101 00:09:18.743884   26955 command_runner.go:130] > # 	"KILL",
	I1101 00:09:18.743891   26955 command_runner.go:130] > # ]
	I1101 00:09:18.743903   26955 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1101 00:09:18.743916   26955 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1101 00:09:18.743926   26955 command_runner.go:130] > # default_sysctls = [
	I1101 00:09:18.743947   26955 command_runner.go:130] > # ]
	I1101 00:09:18.743960   26955 command_runner.go:130] > # List of devices on the host that a
	I1101 00:09:18.743971   26955 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1101 00:09:18.743982   26955 command_runner.go:130] > # allowed_devices = [
	I1101 00:09:18.743989   26955 command_runner.go:130] > # 	"/dev/fuse",
	I1101 00:09:18.744009   26955 command_runner.go:130] > # ]
	I1101 00:09:18.744022   26955 command_runner.go:130] > # List of additional devices. specified as
	I1101 00:09:18.744032   26955 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1101 00:09:18.744048   26955 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1101 00:09:18.744112   26955 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1101 00:09:18.744129   26955 command_runner.go:130] > # additional_devices = [
	I1101 00:09:18.744132   26955 command_runner.go:130] > # ]
	I1101 00:09:18.744137   26955 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1101 00:09:18.744143   26955 command_runner.go:130] > # cdi_spec_dirs = [
	I1101 00:09:18.744149   26955 command_runner.go:130] > # 	"/etc/cdi",
	I1101 00:09:18.744156   26955 command_runner.go:130] > # 	"/var/run/cdi",
	I1101 00:09:18.744163   26955 command_runner.go:130] > # ]
	I1101 00:09:18.744175   26955 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1101 00:09:18.744189   26955 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1101 00:09:18.744199   26955 command_runner.go:130] > # Defaults to false.
	I1101 00:09:18.744207   26955 command_runner.go:130] > # device_ownership_from_security_context = false
	I1101 00:09:18.744220   26955 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1101 00:09:18.744230   26955 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1101 00:09:18.744239   26955 command_runner.go:130] > # hooks_dir = [
	I1101 00:09:18.744251   26955 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1101 00:09:18.744257   26955 command_runner.go:130] > # ]
	I1101 00:09:18.744271   26955 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1101 00:09:18.744286   26955 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1101 00:09:18.744298   26955 command_runner.go:130] > # its default mounts from the following two files:
	I1101 00:09:18.744307   26955 command_runner.go:130] > #
	I1101 00:09:18.744317   26955 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1101 00:09:18.744331   26955 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1101 00:09:18.744344   26955 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1101 00:09:18.744354   26955 command_runner.go:130] > #
	I1101 00:09:18.744364   26955 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1101 00:09:18.744379   26955 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1101 00:09:18.744394   26955 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1101 00:09:18.744405   26955 command_runner.go:130] > #      only add mounts it finds in this file.
	I1101 00:09:18.744414   26955 command_runner.go:130] > #
	I1101 00:09:18.744422   26955 command_runner.go:130] > # default_mounts_file = ""
	I1101 00:09:18.744430   26955 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1101 00:09:18.744451   26955 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1101 00:09:18.744462   26955 command_runner.go:130] > pids_limit = 1024
	I1101 00:09:18.744473   26955 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1101 00:09:18.744487   26955 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1101 00:09:18.744500   26955 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1101 00:09:18.744517   26955 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1101 00:09:18.744527   26955 command_runner.go:130] > # log_size_max = -1
	I1101 00:09:18.744535   26955 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I1101 00:09:18.744544   26955 command_runner.go:130] > # log_to_journald = false
	I1101 00:09:18.744555   26955 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1101 00:09:18.744567   26955 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1101 00:09:18.744577   26955 command_runner.go:130] > # Path to directory for container attach sockets.
	I1101 00:09:18.744589   26955 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1101 00:09:18.744598   26955 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1101 00:09:18.744608   26955 command_runner.go:130] > # bind_mount_prefix = ""
	I1101 00:09:18.744618   26955 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1101 00:09:18.744628   26955 command_runner.go:130] > # read_only = false
	I1101 00:09:18.744637   26955 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1101 00:09:18.744653   26955 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1101 00:09:18.744664   26955 command_runner.go:130] > # live configuration reload.
	I1101 00:09:18.744671   26955 command_runner.go:130] > # log_level = "info"
	I1101 00:09:18.744681   26955 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1101 00:09:18.744694   26955 command_runner.go:130] > # This option supports live configuration reload.
	I1101 00:09:18.744704   26955 command_runner.go:130] > # log_filter = ""
	I1101 00:09:18.744714   26955 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1101 00:09:18.744728   26955 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1101 00:09:18.744738   26955 command_runner.go:130] > # separated by comma.
	I1101 00:09:18.744744   26955 command_runner.go:130] > # uid_mappings = ""
	I1101 00:09:18.744752   26955 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1101 00:09:18.744766   26955 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1101 00:09:18.744776   26955 command_runner.go:130] > # separated by comma.
	I1101 00:09:18.744784   26955 command_runner.go:130] > # gid_mappings = ""
	I1101 00:09:18.744798   26955 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1101 00:09:18.744812   26955 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1101 00:09:18.744825   26955 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1101 00:09:18.744835   26955 command_runner.go:130] > # minimum_mappable_uid = -1
	I1101 00:09:18.744850   26955 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1101 00:09:18.744862   26955 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1101 00:09:18.744876   26955 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1101 00:09:18.744887   26955 command_runner.go:130] > # minimum_mappable_gid = -1
	I1101 00:09:18.744899   26955 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1101 00:09:18.744913   26955 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1101 00:09:18.744926   26955 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1101 00:09:18.744934   26955 command_runner.go:130] > # ctr_stop_timeout = 30
	I1101 00:09:18.744942   26955 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1101 00:09:18.744951   26955 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1101 00:09:18.744960   26955 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1101 00:09:18.744972   26955 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1101 00:09:18.744982   26955 command_runner.go:130] > drop_infra_ctr = false
	I1101 00:09:18.745003   26955 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1101 00:09:18.745016   26955 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1101 00:09:18.745031   26955 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1101 00:09:18.745038   26955 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1101 00:09:18.745044   26955 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1101 00:09:18.745053   26955 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1101 00:09:18.745061   26955 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1101 00:09:18.745073   26955 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1101 00:09:18.745080   26955 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1101 00:09:18.745091   26955 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1101 00:09:18.745105   26955 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1101 00:09:18.745119   26955 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1101 00:09:18.745130   26955 command_runner.go:130] > # default_runtime = "runc"
	I1101 00:09:18.745147   26955 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1101 00:09:18.745161   26955 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1101 00:09:18.745176   26955 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1101 00:09:18.745188   26955 command_runner.go:130] > # creation as a file is not desired either.
	I1101 00:09:18.745202   26955 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1101 00:09:18.745215   26955 command_runner.go:130] > # the hostname is being managed dynamically.
	I1101 00:09:18.745226   26955 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1101 00:09:18.745235   26955 command_runner.go:130] > # ]
	I1101 00:09:18.745258   26955 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1101 00:09:18.745271   26955 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1101 00:09:18.745291   26955 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1101 00:09:18.745305   26955 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1101 00:09:18.745314   26955 command_runner.go:130] > #
	I1101 00:09:18.745323   26955 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1101 00:09:18.745334   26955 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1101 00:09:18.745343   26955 command_runner.go:130] > #  runtime_type = "oci"
	I1101 00:09:18.745349   26955 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1101 00:09:18.745359   26955 command_runner.go:130] > #  privileged_without_host_devices = false
	I1101 00:09:18.745367   26955 command_runner.go:130] > #  allowed_annotations = []
	I1101 00:09:18.745377   26955 command_runner.go:130] > # Where:
	I1101 00:09:18.745386   26955 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1101 00:09:18.745401   26955 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1101 00:09:18.745415   26955 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1101 00:09:18.745428   26955 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1101 00:09:18.745438   26955 command_runner.go:130] > #   in $PATH.
	I1101 00:09:18.745451   26955 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1101 00:09:18.745460   26955 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1101 00:09:18.745471   26955 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1101 00:09:18.745484   26955 command_runner.go:130] > #   state.
	I1101 00:09:18.745496   26955 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1101 00:09:18.745509   26955 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1101 00:09:18.745523   26955 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1101 00:09:18.745536   26955 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1101 00:09:18.745552   26955 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1101 00:09:18.745565   26955 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1101 00:09:18.745574   26955 command_runner.go:130] > #   The currently recognized values are:
	I1101 00:09:18.745589   26955 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1101 00:09:18.745605   26955 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1101 00:09:18.745619   26955 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1101 00:09:18.745633   26955 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1101 00:09:18.745648   26955 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1101 00:09:18.745658   26955 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1101 00:09:18.745668   26955 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1101 00:09:18.745683   26955 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1101 00:09:18.745695   26955 command_runner.go:130] > #   should be moved to the container's cgroup
	I1101 00:09:18.745705   26955 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1101 00:09:18.745721   26955 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1101 00:09:18.745731   26955 command_runner.go:130] > runtime_type = "oci"
	I1101 00:09:18.745742   26955 command_runner.go:130] > runtime_root = "/run/runc"
	I1101 00:09:18.745749   26955 command_runner.go:130] > runtime_config_path = ""
	I1101 00:09:18.745754   26955 command_runner.go:130] > monitor_path = ""
	I1101 00:09:18.745763   26955 command_runner.go:130] > monitor_cgroup = ""
	I1101 00:09:18.745775   26955 command_runner.go:130] > monitor_exec_cgroup = ""
	I1101 00:09:18.745787   26955 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1101 00:09:18.745797   26955 command_runner.go:130] > # running containers
	I1101 00:09:18.745805   26955 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1101 00:09:18.745819   26955 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1101 00:09:18.746125   26955 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1101 00:09:18.746146   26955 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1101 00:09:18.746164   26955 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1101 00:09:18.746175   26955 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1101 00:09:18.746186   26955 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1101 00:09:18.746194   26955 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1101 00:09:18.746213   26955 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1101 00:09:18.746233   26955 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1101 00:09:18.746249   26955 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1101 00:09:18.746261   26955 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1101 00:09:18.746279   26955 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1101 00:09:18.746297   26955 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1101 00:09:18.746318   26955 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1101 00:09:18.746333   26955 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1101 00:09:18.746353   26955 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1101 00:09:18.746366   26955 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1101 00:09:18.746386   26955 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1101 00:09:18.746403   26955 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1101 00:09:18.746409   26955 command_runner.go:130] > # Example:
	I1101 00:09:18.746423   26955 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1101 00:09:18.746438   26955 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1101 00:09:18.746454   26955 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1101 00:09:18.746471   26955 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1101 00:09:18.746488   26955 command_runner.go:130] > # cpuset = 0
	I1101 00:09:18.746500   26955 command_runner.go:130] > # cpushares = "0-1"
	I1101 00:09:18.746514   26955 command_runner.go:130] > # Where:
	I1101 00:09:18.746528   26955 command_runner.go:130] > # The workload name is workload-type.
	I1101 00:09:18.746545   26955 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1101 00:09:18.746555   26955 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1101 00:09:18.746565   26955 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1101 00:09:18.746583   26955 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1101 00:09:18.746601   26955 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1101 00:09:18.746615   26955 command_runner.go:130] > # 
	I1101 00:09:18.746632   26955 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1101 00:09:18.746643   26955 command_runner.go:130] > #
	I1101 00:09:18.746657   26955 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1101 00:09:18.746667   26955 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1101 00:09:18.746683   26955 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1101 00:09:18.746694   26955 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1101 00:09:18.746715   26955 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1101 00:09:18.746727   26955 command_runner.go:130] > [crio.image]
	I1101 00:09:18.746737   26955 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1101 00:09:18.746750   26955 command_runner.go:130] > # default_transport = "docker://"
	I1101 00:09:18.746768   26955 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1101 00:09:18.746784   26955 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1101 00:09:18.746791   26955 command_runner.go:130] > # global_auth_file = ""
	I1101 00:09:18.746805   26955 command_runner.go:130] > # The image used to instantiate infra containers.
	I1101 00:09:18.746818   26955 command_runner.go:130] > # This option supports live configuration reload.
	I1101 00:09:18.746827   26955 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1101 00:09:18.746851   26955 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1101 00:09:18.746868   26955 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1101 00:09:18.746877   26955 command_runner.go:130] > # This option supports live configuration reload.
	I1101 00:09:18.746885   26955 command_runner.go:130] > # pause_image_auth_file = ""
	I1101 00:09:18.746902   26955 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1101 00:09:18.746913   26955 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1101 00:09:18.746929   26955 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1101 00:09:18.746950   26955 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1101 00:09:18.746962   26955 command_runner.go:130] > # pause_command = "/pause"
	I1101 00:09:18.746974   26955 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1101 00:09:18.746994   26955 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1101 00:09:18.747005   26955 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1101 00:09:18.747030   26955 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1101 00:09:18.747044   26955 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1101 00:09:18.747051   26955 command_runner.go:130] > # signature_policy = ""
	I1101 00:09:18.747061   26955 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1101 00:09:18.747076   26955 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1101 00:09:18.747088   26955 command_runner.go:130] > # changing them here.
	I1101 00:09:18.747095   26955 command_runner.go:130] > # insecure_registries = [
	I1101 00:09:18.747101   26955 command_runner.go:130] > # ]
	I1101 00:09:18.747120   26955 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1101 00:09:18.747134   26955 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1101 00:09:18.747140   26955 command_runner.go:130] > # image_volumes = "mkdir"
	I1101 00:09:18.747149   26955 command_runner.go:130] > # Temporary directory to use for storing big files
	I1101 00:09:18.747162   26955 command_runner.go:130] > # big_files_temporary_dir = ""
	I1101 00:09:18.747177   26955 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1101 00:09:18.747183   26955 command_runner.go:130] > # CNI plugins.
	I1101 00:09:18.747189   26955 command_runner.go:130] > [crio.network]
	I1101 00:09:18.747204   26955 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1101 00:09:18.747221   26955 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1101 00:09:18.747232   26955 command_runner.go:130] > # cni_default_network = ""
	I1101 00:09:18.747258   26955 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1101 00:09:18.747271   26955 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1101 00:09:18.747281   26955 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1101 00:09:18.747299   26955 command_runner.go:130] > # plugin_dirs = [
	I1101 00:09:18.747312   26955 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1101 00:09:18.747322   26955 command_runner.go:130] > # ]
	I1101 00:09:18.747342   26955 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1101 00:09:18.747349   26955 command_runner.go:130] > [crio.metrics]
	I1101 00:09:18.747357   26955 command_runner.go:130] > # Globally enable or disable metrics support.
	I1101 00:09:18.747364   26955 command_runner.go:130] > enable_metrics = true
	I1101 00:09:18.747377   26955 command_runner.go:130] > # Specify enabled metrics collectors.
	I1101 00:09:18.747393   26955 command_runner.go:130] > # Per default all metrics are enabled.
	I1101 00:09:18.747416   26955 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1101 00:09:18.747432   26955 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1101 00:09:18.747442   26955 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1101 00:09:18.747454   26955 command_runner.go:130] > # metrics_collectors = [
	I1101 00:09:18.747460   26955 command_runner.go:130] > # 	"operations",
	I1101 00:09:18.747479   26955 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1101 00:09:18.747487   26955 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1101 00:09:18.747494   26955 command_runner.go:130] > # 	"operations_errors",
	I1101 00:09:18.747502   26955 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1101 00:09:18.747515   26955 command_runner.go:130] > # 	"image_pulls_by_name",
	I1101 00:09:18.747534   26955 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1101 00:09:18.747546   26955 command_runner.go:130] > # 	"image_pulls_failures",
	I1101 00:09:18.747557   26955 command_runner.go:130] > # 	"image_pulls_successes",
	I1101 00:09:18.747579   26955 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1101 00:09:18.747621   26955 command_runner.go:130] > # 	"image_layer_reuse",
	I1101 00:09:18.747660   26955 command_runner.go:130] > # 	"containers_oom_total",
	I1101 00:09:18.747669   26955 command_runner.go:130] > # 	"containers_oom",
	I1101 00:09:18.747676   26955 command_runner.go:130] > # 	"processes_defunct",
	I1101 00:09:18.747687   26955 command_runner.go:130] > # 	"operations_total",
	I1101 00:09:18.747718   26955 command_runner.go:130] > # 	"operations_latency_seconds",
	I1101 00:09:18.747740   26955 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1101 00:09:18.748095   26955 command_runner.go:130] > # 	"operations_errors_total",
	I1101 00:09:18.748126   26955 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1101 00:09:18.748135   26955 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1101 00:09:18.748144   26955 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1101 00:09:18.748151   26955 command_runner.go:130] > # 	"image_pulls_success_total",
	I1101 00:09:18.748160   26955 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1101 00:09:18.748171   26955 command_runner.go:130] > # 	"containers_oom_count_total",
	I1101 00:09:18.748179   26955 command_runner.go:130] > # ]
	I1101 00:09:18.748187   26955 command_runner.go:130] > # The port on which the metrics server will listen.
	I1101 00:09:18.748206   26955 command_runner.go:130] > # metrics_port = 9090
	I1101 00:09:18.748219   26955 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1101 00:09:18.748230   26955 command_runner.go:130] > # metrics_socket = ""
	I1101 00:09:18.748242   26955 command_runner.go:130] > # The certificate for the secure metrics server.
	I1101 00:09:18.748254   26955 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1101 00:09:18.748268   26955 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1101 00:09:18.748280   26955 command_runner.go:130] > # certificate on any modification event.
	I1101 00:09:18.748291   26955 command_runner.go:130] > # metrics_cert = ""
	I1101 00:09:18.748304   26955 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1101 00:09:18.748316   26955 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1101 00:09:18.748326   26955 command_runner.go:130] > # metrics_key = ""
	I1101 00:09:18.748337   26955 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1101 00:09:18.748347   26955 command_runner.go:130] > [crio.tracing]
	I1101 00:09:18.748357   26955 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1101 00:09:18.748364   26955 command_runner.go:130] > # enable_tracing = false
	I1101 00:09:18.748374   26955 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1101 00:09:18.748386   26955 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1101 00:09:18.748398   26955 command_runner.go:130] > # Number of samples to collect per million spans.
	I1101 00:09:18.748411   26955 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1101 00:09:18.748425   26955 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1101 00:09:18.748435   26955 command_runner.go:130] > [crio.stats]
	I1101 00:09:18.748445   26955 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1101 00:09:18.748452   26955 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1101 00:09:18.748463   26955 command_runner.go:130] > # stats_collection_period = 0
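The dump above is CRI-O's effective configuration with almost every key left at its commented-out default; only a handful of settings (drop_infra_ctr, pinns_path, pause_image, enable_metrics, the runc runtime table) are set explicitly. As a rough illustration of how such a dump can be reduced to its active settings, the following Go sketch scans a crio.conf-style text and keeps only the uncommented key = value pairs. It is a hypothetical helper, not part of minikube, and the /etc/crio/crio.conf path is an assumption.

package main

import (
	"bufio"
	"fmt"
	"io"
	"os"
	"strings"
)

// activeSettings returns the uncommented key = value pairs from a
// crio.conf-style dump, skipping comments and [section] headers.
func activeSettings(r io.Reader) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(r)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") || strings.HasPrefix(line, "[") {
			continue
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			out[strings.TrimSpace(k)] = strings.Trim(strings.TrimSpace(v), `"`)
		}
	}
	return out
}

func main() {
	f, err := os.Open("/etc/crio/crio.conf") // path is an assumption, not taken from the log
	if err != nil {
		panic(err)
	}
	defer f.Close()
	for k, v := range activeSettings(f) {
		fmt.Printf("%s = %s\n", k, v)
	}
}

Run against the configuration shown above, this would surface entries such as pinns_path = /usr/bin/pinns and pause_image = registry.k8s.io/pause:3.9 while ignoring the commented defaults.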
	I1101 00:09:18.748570   26955 cni.go:84] Creating CNI manager for ""
	I1101 00:09:18.748585   26955 cni.go:136] 2 nodes found, recommending kindnet
	I1101 00:09:18.748596   26955 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 00:09:18.748624   26955 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.109 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-600483 NodeName:multinode-600483-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.130"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.109 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 00:09:18.748735   26955 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.109
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-600483-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.109
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.130"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 00:09:18.748777   26955 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-600483-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-600483 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
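The kubeadm YAML and the kubelet drop-in above are rendered from minikube's cluster config for the joining node. A minimal sketch of that kind of rendering with Go's text/template follows; the struct and template here are simplified stand-ins, not minikube's actual types or templates.

package main

import (
	"os"
	"text/template"
)

// nodeConfig holds only the fields this sketch needs; the real config
// carried in the log line above is far larger.
type nodeConfig struct {
	NodeName          string
	NodeIP            string
	ControlPlaneHost  string
	KubernetesVersion string
	PodSubnet         string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: 8443
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: {{.ControlPlaneHost}}:8443
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	cfg := nodeConfig{
		NodeName:          "multinode-600483-m02",
		NodeIP:            "192.168.39.109",
		ControlPlaneHost:  "control-plane.minikube.internal",
		KubernetesVersion: "v1.28.3",
		PodSubnet:         "10.244.0.0/16",
	}
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}

The values in main mirror the node parameters visible in the log; executing the template prints a reduced version of the InitConfiguration/ClusterConfiguration pair shown above.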
	I1101 00:09:18.748829   26955 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 00:09:18.758491   26955 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.3': No such file or directory
	I1101 00:09:18.758543   26955 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.3': No such file or directory
	
	Initiating transfer...
	I1101 00:09:18.758602   26955 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.3
	I1101 00:09:18.767719   26955 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl.sha256
	I1101 00:09:18.767745   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/linux/amd64/v1.28.3/kubectl -> /var/lib/minikube/binaries/v1.28.3/kubectl
	I1101 00:09:18.767813   26955 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubectl
	I1101 00:09:18.767899   26955 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17486-7305/.minikube/cache/linux/amd64/v1.28.3/kubeadm
	I1101 00:09:18.767923   26955 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17486-7305/.minikube/cache/linux/amd64/v1.28.3/kubelet
	I1101 00:09:18.772264   26955 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubectl': No such file or directory
	I1101 00:09:18.772471   26955 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubectl': No such file or directory
	I1101 00:09:18.772504   26955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/cache/linux/amd64/v1.28.3/kubectl --> /var/lib/minikube/binaries/v1.28.3/kubectl (49872896 bytes)
	I1101 00:09:19.733835   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/linux/amd64/v1.28.3/kubeadm -> /var/lib/minikube/binaries/v1.28.3/kubeadm
	I1101 00:09:19.733903   26955 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubeadm
	I1101 00:09:19.739176   26955 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubeadm': No such file or directory
	I1101 00:09:19.739219   26955 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubeadm': No such file or directory
	I1101 00:09:19.739242   26955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/cache/linux/amd64/v1.28.3/kubeadm --> /var/lib/minikube/binaries/v1.28.3/kubeadm (49045504 bytes)
	I1101 00:09:20.153366   26955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 00:09:20.167805   26955 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/linux/amd64/v1.28.3/kubelet -> /var/lib/minikube/binaries/v1.28.3/kubelet
	I1101 00:09:20.167899   26955 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubelet
	I1101 00:09:20.172597   26955 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubelet': No such file or directory
	I1101 00:09:20.172639   26955 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubelet': No such file or directory
	I1101 00:09:20.172673   26955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/cache/linux/amd64/v1.28.3/kubelet --> /var/lib/minikube/binaries/v1.28.3/kubelet (110780416 bytes)
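The pattern above is a copy-if-missing transfer: stat the target path, and only when the stat fails scp the cached binary into /var/lib/minikube/binaries. A local-filesystem sketch of the same idea follows; the paths are modelled on the log but hypothetical, and minikube performs the copy over SSH rather than on the local disk.

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// ensureBinary copies src to dst only when dst does not already exist,
// mirroring the stat-then-scp pattern in the log above.
func ensureBinary(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present, nothing to do
	} else if !os.IsNotExist(err) {
		return err
	}
	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Example paths modelled on the log; adjust for a real environment.
	err := ensureBinary(
		"/home/jenkins/.minikube/cache/linux/amd64/v1.28.3/kubelet",
		"/var/lib/minikube/binaries/v1.28.3/kubelet",
	)
	fmt.Println("copy error:", err)
}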
	I1101 00:09:20.668446   26955 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1101 00:09:20.677090   26955 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1101 00:09:20.692678   26955 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 00:09:20.707953   26955 ssh_runner.go:195] Run: grep 192.168.39.130	control-plane.minikube.internal$ /etc/hosts
	I1101 00:09:20.711334   26955 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.130	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
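The bash one-liner above makes the /etc/hosts edit idempotent: it drops any existing control-plane.minikube.internal line before appending a fresh one for the control-plane IP. The same transformation, sketched in Go and writing to stdout instead of /etc/hosts so it is safe to run as-is:

package main

import (
	"fmt"
	"os"
	"strings"
)

// withControlPlaneEntry removes any existing control-plane.minikube.internal
// line from a hosts file and appends a fresh one for the given IP.
func withControlPlaneEntry(hosts, ip string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(strings.TrimSpace(line), "control-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\tcontrol-plane.minikube.internal")
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	fmt.Print(withControlPlaneEntry(strings.TrimRight(string(data), "\n"), "192.168.39.130"))
}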
	I1101 00:09:20.722576   26955 host.go:66] Checking if "multinode-600483" exists ...
	I1101 00:09:20.722813   26955 config.go:182] Loaded profile config "multinode-600483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:09:20.722916   26955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1101 00:09:20.722953   26955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:09:20.737855   26955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39651
	I1101 00:09:20.738279   26955 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:09:20.738714   26955 main.go:141] libmachine: Using API Version  1
	I1101 00:09:20.738735   26955 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:09:20.739054   26955 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:09:20.739391   26955 main.go:141] libmachine: (multinode-600483) Calling .DriverName
	I1101 00:09:20.739569   26955 start.go:304] JoinCluster: &{Name:multinode-600483 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.3 ClusterName:multinode-600483 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.130 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.109 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true Extra
Disks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:09:20.739653   26955 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1101 00:09:20.739669   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHHostname
	I1101 00:09:20.742731   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:09:20.743251   26955 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:09:20.743296   26955 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:09:20.743393   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHPort
	I1101 00:09:20.743552   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:09:20.743680   26955 main.go:141] libmachine: (multinode-600483) Calling .GetSSHUsername
	I1101 00:09:20.743824   26955 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483/id_rsa Username:docker}
	I1101 00:09:20.909536   26955 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token it1vsj.wp9karzc9j5uxpep --discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 
	I1101 00:09:20.912331   26955 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.109 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1101 00:09:20.912376   26955 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token it1vsj.wp9karzc9j5uxpep --discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-600483-m02"
	I1101 00:09:20.961215   26955 command_runner.go:130] > [preflight] Running pre-flight checks
	I1101 00:09:21.103860   26955 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1101 00:09:21.103890   26955 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1101 00:09:21.146433   26955 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 00:09:21.146454   26955 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 00:09:21.146460   26955 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1101 00:09:21.262257   26955 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1101 00:09:23.780830   26955 command_runner.go:130] > This node has joined the cluster:
	I1101 00:09:23.780860   26955 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1101 00:09:23.780871   26955 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1101 00:09:23.780891   26955 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1101 00:09:23.782323   26955 command_runner.go:130] ! W1101 00:09:20.940834     815 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1101 00:09:23.782353   26955 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 00:09:23.782380   26955 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token it1vsj.wp9karzc9j5uxpep --discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-600483-m02": (2.86998818s)
	I1101 00:09:23.782404   26955 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1101 00:09:24.065875   26955 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I1101 00:09:24.065912   26955 start.go:306] JoinCluster complete in 3.326344567s
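The join sequence above is driven by `kubeadm token create --print-join-command` on the control plane, whose output (the line logged at 00:09:20.909536) is replayed verbatim on the new worker with extra flags. A small sketch of pulling the bootstrap token and discovery CA-cert hash out of that output; parseJoinCommand is a hypothetical helper, not a minikube function.

package main

import (
	"fmt"
	"strings"
)

// parseJoinCommand extracts the --token and --discovery-token-ca-cert-hash
// values from the output of `kubeadm token create --print-join-command`.
func parseJoinCommand(out string) (token, caHash string) {
	fields := strings.Fields(out)
	for i, f := range fields {
		switch f {
		case "--token":
			if i+1 < len(fields) {
				token = fields[i+1]
			}
		case "--discovery-token-ca-cert-hash":
			if i+1 < len(fields) {
				caHash = fields[i+1]
			}
		}
	}
	return token, caHash
}

func main() {
	out := "kubeadm join control-plane.minikube.internal:8443 --token it1vsj.wp9karzc9j5uxpep " +
		"--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724"
	token, hash := parseJoinCommand(out)
	fmt.Println("token:", token)
	fmt.Println("ca hash:", hash)
}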
	I1101 00:09:24.065924   26955 cni.go:84] Creating CNI manager for ""
	I1101 00:09:24.065931   26955 cni.go:136] 2 nodes found, recommending kindnet
	I1101 00:09:24.065988   26955 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 00:09:24.071392   26955 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1101 00:09:24.071418   26955 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1101 00:09:24.071447   26955 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1101 00:09:24.071462   26955 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1101 00:09:24.071473   26955 command_runner.go:130] > Access: 2023-11-01 00:07:57.714642713 +0000
	I1101 00:09:24.071485   26955 command_runner.go:130] > Modify: 2023-10-31 23:04:20.000000000 +0000
	I1101 00:09:24.071494   26955 command_runner.go:130] > Change: 2023-11-01 00:07:55.940642713 +0000
	I1101 00:09:24.071504   26955 command_runner.go:130] >  Birth: -
	I1101 00:09:24.071554   26955 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1101 00:09:24.071568   26955 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1101 00:09:24.101307   26955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 00:09:24.410507   26955 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1101 00:09:24.420804   26955 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1101 00:09:24.426345   26955 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1101 00:09:24.445259   26955 command_runner.go:130] > daemonset.apps/kindnet configured
	I1101 00:09:24.448589   26955 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 00:09:24.448883   26955 kapi.go:59] client config for multinode-600483: &rest.Config{Host:"https://192.168.39.130:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.key", CAFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 00:09:24.449248   26955 round_trippers.go:463] GET https://192.168.39.130:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1101 00:09:24.449268   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:24.449279   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:24.449290   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:24.452197   26955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:24.452220   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:24.452231   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:24.452244   26955 round_trippers.go:580]     Content-Length: 291
	I1101 00:09:24.452254   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:24 GMT
	I1101 00:09:24.452264   26955 round_trippers.go:580]     Audit-Id: aba7e6d7-134e-41bc-ba59-1f83149be22b
	I1101 00:09:24.452275   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:24.452285   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:24.452296   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:24.452320   26955 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"21004493-8bb6-43e9-8ba2-65d98d570b24","resourceVersion":"406","creationTimestamp":"2023-11-01T00:08:30Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1101 00:09:24.452416   26955 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-600483" context rescaled to 1 replicas
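The GET against the coredns /scale subresource above is how minikube's kapi helper rescales the deployment to one replica. With client-go the equivalent calls are GetScale and UpdateScale on the typed Deployment interface; the sketch below assumes a kubeconfig at the default ~/.kube/config location (minikube's test harness actually points at its own kubeconfig) and a recent client-go release.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Read the current scale of the coredns deployment ...
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("current replicas:", scale.Spec.Replicas)

	// ... and rescale it to a single replica, as the log above records.
	scale.Spec.Replicas = 1
	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}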
	I1101 00:09:24.452448   26955 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.109 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1101 00:09:24.455605   26955 out.go:177] * Verifying Kubernetes components...
	I1101 00:09:24.457110   26955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 00:09:24.470909   26955 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 00:09:24.471235   26955 kapi.go:59] client config for multinode-600483: &rest.Config{Host:"https://192.168.39.130:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.key", CAFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 00:09:24.471608   26955 node_ready.go:35] waiting up to 6m0s for node "multinode-600483-m02" to be "Ready" ...
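The repeated node GETs that follow are the readiness poll: minikube fetches the node object every ~500ms and checks its NodeReady condition until it turns True or the 6-minute budget runs out. An equivalent loop with client-go, under the same kubeconfig assumption as the previous sketch:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's NodeReady condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		n, err := cs.CoreV1().Nodes().Get(ctx, "multinode-600483-m02", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for node to become Ready")
			return
		case <-time.After(500 * time.Millisecond):
		}
	}
}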
	I1101 00:09:24.471691   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m02
	I1101 00:09:24.471702   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:24.471713   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:24.471726   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:24.475056   26955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:24.475080   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:24.475098   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:24.475106   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:24.475115   26955 round_trippers.go:580]     Content-Length: 3531
	I1101 00:09:24.475130   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:24 GMT
	I1101 00:09:24.475138   26955 round_trippers.go:580]     Audit-Id: f9c11281-afb6-45e5-923e-08e3adb66ae0
	I1101 00:09:24.475145   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:24.475152   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:24.475392   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483-m02","uid":"5b2b1f13-2a35-43d5-86a5-bb5c1d6395e1","resourceVersion":"457","creationTimestamp":"2023-11-01T00:09:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2507 chars]
	I1101 00:09:24.475766   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m02
	I1101 00:09:24.475783   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:24.475794   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:24.475807   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:24.478401   26955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:24.478423   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:24.478433   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:24.478443   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:24.478452   26955 round_trippers.go:580]     Content-Length: 3531
	I1101 00:09:24.478460   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:24 GMT
	I1101 00:09:24.478477   26955 round_trippers.go:580]     Audit-Id: 6bd2fab6-9502-4da6-b7af-77092a8f6c25
	I1101 00:09:24.478486   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:24.478495   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:24.478566   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483-m02","uid":"5b2b1f13-2a35-43d5-86a5-bb5c1d6395e1","resourceVersion":"457","creationTimestamp":"2023-11-01T00:09:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2507 chars]
	I1101 00:09:24.979153   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m02
	I1101 00:09:24.979173   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:24.979183   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:24.979192   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:24.982281   26955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:24.982302   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:24.982309   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:24.982315   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:24.982320   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:24.982330   26955 round_trippers.go:580]     Content-Length: 3531
	I1101 00:09:24.982335   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:24 GMT
	I1101 00:09:24.982341   26955 round_trippers.go:580]     Audit-Id: 90cd1b84-4cf7-4361-a4ac-a2db2056206e
	I1101 00:09:24.982346   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:24.982434   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483-m02","uid":"5b2b1f13-2a35-43d5-86a5-bb5c1d6395e1","resourceVersion":"457","creationTimestamp":"2023-11-01T00:09:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2507 chars]
	I1101 00:09:25.479775   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m02
	I1101 00:09:25.479806   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:25.479822   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:25.479831   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:25.483090   26955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:25.483113   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:25.483123   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:25.483129   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:25.483134   26955 round_trippers.go:580]     Content-Length: 3531
	I1101 00:09:25.483139   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:25 GMT
	I1101 00:09:25.483144   26955 round_trippers.go:580]     Audit-Id: 5a295e46-201b-40dd-a803-fa4085e895ce
	I1101 00:09:25.483149   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:25.483154   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:25.483191   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483-m02","uid":"5b2b1f13-2a35-43d5-86a5-bb5c1d6395e1","resourceVersion":"457","creationTimestamp":"2023-11-01T00:09:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2507 chars]
	I1101 00:09:25.979436   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m02
	I1101 00:09:25.979475   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:25.979483   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:25.979489   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:25.982472   26955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:25.982491   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:25.982502   26955 round_trippers.go:580]     Content-Length: 3531
	I1101 00:09:25.982507   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:25 GMT
	I1101 00:09:25.982513   26955 round_trippers.go:580]     Audit-Id: a36f9d60-1bd5-416a-a63d-e7623936f89c
	I1101 00:09:25.982518   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:25.982523   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:25.982529   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:25.982534   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:25.982602   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483-m02","uid":"5b2b1f13-2a35-43d5-86a5-bb5c1d6395e1","resourceVersion":"457","creationTimestamp":"2023-11-01T00:09:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2507 chars]
	I1101 00:09:26.479105   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m02
	I1101 00:09:26.479128   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:26.479136   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:26.479146   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:26.482407   26955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:26.482433   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:26.482443   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:26 GMT
	I1101 00:09:26.482451   26955 round_trippers.go:580]     Audit-Id: 8e1a8fd2-46d6-4dba-812e-4cbd32251869
	I1101 00:09:26.482464   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:26.482477   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:26.482489   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:26.482502   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:26.482515   26955 round_trippers.go:580]     Content-Length: 3531
	I1101 00:09:26.482618   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483-m02","uid":"5b2b1f13-2a35-43d5-86a5-bb5c1d6395e1","resourceVersion":"457","creationTimestamp":"2023-11-01T00:09:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2507 chars]
	I1101 00:09:26.482909   26955 node_ready.go:58] node "multinode-600483-m02" has status "Ready":"False"
	I1101 00:09:26.979603   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m02
	I1101 00:09:26.979625   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:26.979637   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:26.979645   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:27.003979   26955 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I1101 00:09:27.004014   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:27.004023   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:27.004029   26955 round_trippers.go:580]     Content-Length: 3640
	I1101 00:09:27.004036   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:26 GMT
	I1101 00:09:27.004042   26955 round_trippers.go:580]     Audit-Id: e3825a25-f901-43a1-91aa-8462bd00776f
	I1101 00:09:27.004049   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:27.004058   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:27.004067   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:27.004408   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483-m02","uid":"5b2b1f13-2a35-43d5-86a5-bb5c1d6395e1","resourceVersion":"463","creationTimestamp":"2023-11-01T00:09:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1101 00:09:27.479791   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m02
	I1101 00:09:27.479820   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:27.479832   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:27.479844   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:27.482706   26955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:27.482728   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:27.482735   26955 round_trippers.go:580]     Content-Length: 3640
	I1101 00:09:27.482743   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:27 GMT
	I1101 00:09:27.482750   26955 round_trippers.go:580]     Audit-Id: 21b7da0a-81db-4d65-ab40-8b9280eebc18
	I1101 00:09:27.482759   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:27.482767   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:27.482776   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:27.482790   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:27.482879   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483-m02","uid":"5b2b1f13-2a35-43d5-86a5-bb5c1d6395e1","resourceVersion":"463","creationTimestamp":"2023-11-01T00:09:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1101 00:09:27.978996   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m02
	I1101 00:09:27.979021   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:27.979029   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:27.979036   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:27.982451   26955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:27.982482   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:27.982493   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:27.982500   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:27.982507   26955 round_trippers.go:580]     Content-Length: 3640
	I1101 00:09:27.982515   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:27 GMT
	I1101 00:09:27.982525   26955 round_trippers.go:580]     Audit-Id: d74264af-b51e-4d15-b59a-359f2145c2a9
	I1101 00:09:27.982536   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:27.982544   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:27.982638   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483-m02","uid":"5b2b1f13-2a35-43d5-86a5-bb5c1d6395e1","resourceVersion":"463","creationTimestamp":"2023-11-01T00:09:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1101 00:09:28.479793   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m02
	I1101 00:09:28.479820   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:28.479832   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:28.479841   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:28.482626   26955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:28.482658   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:28.482670   26955 round_trippers.go:580]     Content-Length: 3640
	I1101 00:09:28.482680   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:28 GMT
	I1101 00:09:28.482687   26955 round_trippers.go:580]     Audit-Id: 42a91e87-f293-4d1a-9199-a91ac4dca68b
	I1101 00:09:28.482787   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:28.482819   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:28.482829   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:28.482838   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:28.482920   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483-m02","uid":"5b2b1f13-2a35-43d5-86a5-bb5c1d6395e1","resourceVersion":"463","creationTimestamp":"2023-11-01T00:09:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1101 00:09:28.483307   26955 node_ready.go:58] node "multinode-600483-m02" has status "Ready":"False"
	I1101 00:09:28.979290   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m02
	I1101 00:09:28.979331   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:28.979340   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:28.979346   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:28.982472   26955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:28.982503   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:28.982510   26955 round_trippers.go:580]     Audit-Id: e07b05a5-a346-4f4c-9452-9d3694acb86d
	I1101 00:09:28.982516   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:28.982521   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:28.982526   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:28.982531   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:28.982536   26955 round_trippers.go:580]     Content-Length: 3640
	I1101 00:09:28.982541   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:28 GMT
	I1101 00:09:28.982620   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483-m02","uid":"5b2b1f13-2a35-43d5-86a5-bb5c1d6395e1","resourceVersion":"463","creationTimestamp":"2023-11-01T00:09:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1101 00:09:29.479213   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m02
	I1101 00:09:29.479243   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:29.479255   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:29.479264   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:29.482213   26955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:29.482235   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:29.482242   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:29.482248   26955 round_trippers.go:580]     Content-Length: 3640
	I1101 00:09:29.482253   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:29 GMT
	I1101 00:09:29.482258   26955 round_trippers.go:580]     Audit-Id: 559cda81-6fb6-4487-91ec-f717f6eb7a4f
	I1101 00:09:29.482263   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:29.482268   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:29.482273   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:29.482348   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483-m02","uid":"5b2b1f13-2a35-43d5-86a5-bb5c1d6395e1","resourceVersion":"463","creationTimestamp":"2023-11-01T00:09:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1101 00:09:29.980005   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m02
	I1101 00:09:29.980038   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:29.980050   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:29.980059   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:29.983253   26955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:29.983275   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:29.983286   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:29.983297   26955 round_trippers.go:580]     Content-Length: 3640
	I1101 00:09:29.983307   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:29 GMT
	I1101 00:09:29.983320   26955 round_trippers.go:580]     Audit-Id: f04f66b0-36c6-41a8-9063-c316baab6d9f
	I1101 00:09:29.983332   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:29.983345   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:29.983355   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:29.983445   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483-m02","uid":"5b2b1f13-2a35-43d5-86a5-bb5c1d6395e1","resourceVersion":"463","creationTimestamp":"2023-11-01T00:09:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1101 00:09:30.479769   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m02
	I1101 00:09:30.479791   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:30.479799   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:30.479805   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:30.483331   26955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:30.483360   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:30.483374   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:30.483382   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:30.483389   26955 round_trippers.go:580]     Content-Length: 3640
	I1101 00:09:30.483395   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:30 GMT
	I1101 00:09:30.483403   26955 round_trippers.go:580]     Audit-Id: 9bb62372-7b06-4e3f-a7c9-788391aaa884
	I1101 00:09:30.483412   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:30.483421   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:30.483520   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483-m02","uid":"5b2b1f13-2a35-43d5-86a5-bb5c1d6395e1","resourceVersion":"463","creationTimestamp":"2023-11-01T00:09:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1101 00:09:30.483797   26955 node_ready.go:58] node "multinode-600483-m02" has status "Ready":"False"
	I1101 00:09:30.979909   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m02
	I1101 00:09:30.979943   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:30.979955   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:30.979963   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:30.983281   26955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:30.983339   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:30.983357   26955 round_trippers.go:580]     Audit-Id: 91ca8afd-0730-47a4-ac0b-050c4ccdad41
	I1101 00:09:30.983367   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:30.983384   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:30.983392   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:30.983402   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:30.983411   26955 round_trippers.go:580]     Content-Length: 3640
	I1101 00:09:30.983423   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:30 GMT
	I1101 00:09:30.983523   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483-m02","uid":"5b2b1f13-2a35-43d5-86a5-bb5c1d6395e1","resourceVersion":"463","creationTimestamp":"2023-11-01T00:09:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1101 00:09:31.479022   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m02
	I1101 00:09:31.479045   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:31.479065   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:31.479075   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:31.481972   26955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:31.482016   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:31.482027   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:31.482035   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:31.482056   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:31.482069   26955 round_trippers.go:580]     Content-Length: 3640
	I1101 00:09:31.482082   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:31 GMT
	I1101 00:09:31.482091   26955 round_trippers.go:580]     Audit-Id: ca8de31a-1e9f-437a-8761-7a2a3f1bb8aa
	I1101 00:09:31.482106   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:31.482156   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483-m02","uid":"5b2b1f13-2a35-43d5-86a5-bb5c1d6395e1","resourceVersion":"463","creationTimestamp":"2023-11-01T00:09:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1101 00:09:31.979315   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m02
	I1101 00:09:31.979341   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:31.979350   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:31.979361   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:32.028231   26955 round_trippers.go:574] Response Status: 200 OK in 48 milliseconds
	I1101 00:09:32.028252   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:32.028259   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:32.028267   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:32.028274   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:32.028282   26955 round_trippers.go:580]     Content-Length: 3640
	I1101 00:09:32.028291   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:32 GMT
	I1101 00:09:32.028298   26955 round_trippers.go:580]     Audit-Id: 1008374a-b7c2-47b5-9c0b-f2597d74c30a
	I1101 00:09:32.028306   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:32.028410   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483-m02","uid":"5b2b1f13-2a35-43d5-86a5-bb5c1d6395e1","resourceVersion":"463","creationTimestamp":"2023-11-01T00:09:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1101 00:09:32.478990   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m02
	I1101 00:09:32.479012   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:32.479020   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:32.479028   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:32.482433   26955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:32.482457   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:32.482464   26955 round_trippers.go:580]     Content-Length: 3640
	I1101 00:09:32.482470   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:32 GMT
	I1101 00:09:32.482475   26955 round_trippers.go:580]     Audit-Id: 4bcee082-7e5f-4d1d-8abf-110d07bea1e9
	I1101 00:09:32.482480   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:32.482485   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:32.482491   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:32.482495   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:32.482589   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483-m02","uid":"5b2b1f13-2a35-43d5-86a5-bb5c1d6395e1","resourceVersion":"463","creationTimestamp":"2023-11-01T00:09:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1101 00:09:32.979112   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m02
	I1101 00:09:32.979134   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:32.979142   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:32.979148   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:32.981970   26955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:32.981990   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:32.981997   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:32.982002   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:32.982008   26955 round_trippers.go:580]     Content-Length: 3640
	I1101 00:09:32.982013   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:32 GMT
	I1101 00:09:32.982017   26955 round_trippers.go:580]     Audit-Id: f0b98291-faba-4781-b33f-0788f396c155
	I1101 00:09:32.982024   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:32.982032   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:32.982114   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483-m02","uid":"5b2b1f13-2a35-43d5-86a5-bb5c1d6395e1","resourceVersion":"463","creationTimestamp":"2023-11-01T00:09:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1101 00:09:32.982395   26955 node_ready.go:58] node "multinode-600483-m02" has status "Ready":"False"
	I1101 00:09:33.479665   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m02
	I1101 00:09:33.479698   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:33.479706   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:33.479712   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:33.483234   26955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:33.483257   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:33.483264   26955 round_trippers.go:580]     Audit-Id: 59cd87ee-fc6d-4451-8824-2883f939252a
	I1101 00:09:33.483269   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:33.483274   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:33.483280   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:33.483285   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:33.483291   26955 round_trippers.go:580]     Content-Length: 3640
	I1101 00:09:33.483296   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:33 GMT
	I1101 00:09:33.483408   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483-m02","uid":"5b2b1f13-2a35-43d5-86a5-bb5c1d6395e1","resourceVersion":"463","creationTimestamp":"2023-11-01T00:09:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1101 00:09:33.978987   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m02
	I1101 00:09:33.979017   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:33.979028   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:33.979053   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:33.982420   26955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:33.982448   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:33.982458   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:33.982464   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:33.982469   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:33.982477   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:33.982483   26955 round_trippers.go:580]     Content-Length: 3726
	I1101 00:09:33.982488   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:33 GMT
	I1101 00:09:33.982497   26955 round_trippers.go:580]     Audit-Id: 1f0b1fa5-e872-46af-9e62-efe4fd81e85f
	I1101 00:09:33.982589   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483-m02","uid":"5b2b1f13-2a35-43d5-86a5-bb5c1d6395e1","resourceVersion":"487","creationTimestamp":"2023-11-01T00:09:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2702 chars]
	I1101 00:09:33.982898   26955 node_ready.go:49] node "multinode-600483-m02" has status "Ready":"True"
	I1101 00:09:33.982919   26955 node_ready.go:38] duration metric: took 9.51128932s waiting for node "multinode-600483-m02" to be "Ready" ...
	I1101 00:09:33.982932   26955 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 00:09:33.983012   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods
	I1101 00:09:33.983024   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:33.983035   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:33.983050   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:33.987199   26955 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1101 00:09:33.987218   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:33.987224   26955 round_trippers.go:580]     Audit-Id: 351f98ad-5466-4e83-b7a9-5b8edb811a6d
	I1101 00:09:33.987230   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:33.987235   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:33.987240   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:33.987245   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:33.987250   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:33 GMT
	I1101 00:09:33.988677   26955 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"487"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rpvvn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d8ab0ebb-aa1f-4143-b987-6c1ae065954a","resourceVersion":"401","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15779dee-f1e7-4836-aba2-2d57728c2309","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15779dee-f1e7-4836-aba2-2d57728c2309\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67370 chars]
	I1101 00:09:33.990646   26955 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rpvvn" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:33.990727   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rpvvn
	I1101 00:09:33.990743   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:33.990752   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:33.990762   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:33.993496   26955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:33.993516   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:33.993526   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:33.993534   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:33.993548   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:33 GMT
	I1101 00:09:33.993557   26955 round_trippers.go:580]     Audit-Id: 5334bf03-0792-4ec5-b252-e702332389c8
	I1101 00:09:33.993565   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:33.993578   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:33.993957   26955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rpvvn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d8ab0ebb-aa1f-4143-b987-6c1ae065954a","resourceVersion":"401","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15779dee-f1e7-4836-aba2-2d57728c2309","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15779dee-f1e7-4836-aba2-2d57728c2309\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1101 00:09:33.994401   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:09:33.994415   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:33.994422   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:33.994427   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:33.996418   26955 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1101 00:09:33.996437   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:33.996446   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:33 GMT
	I1101 00:09:33.996455   26955 round_trippers.go:580]     Audit-Id: b8c7c3cc-a501-49ab-ac79-d679c50b493d
	I1101 00:09:33.996463   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:33.996472   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:33.996481   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:33.996490   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:33.996704   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"382","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5906 chars]
	I1101 00:09:33.997000   26955 pod_ready.go:92] pod "coredns-5dd5756b68-rpvvn" in "kube-system" namespace has status "Ready":"True"
	I1101 00:09:33.997015   26955 pod_ready.go:81] duration metric: took 6.347304ms waiting for pod "coredns-5dd5756b68-rpvvn" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:33.997023   26955 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:33.997084   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-600483
	I1101 00:09:33.997094   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:33.997101   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:33.997109   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:33.999225   26955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:33.999239   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:33.999246   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:33 GMT
	I1101 00:09:33.999252   26955 round_trippers.go:580]     Audit-Id: 8dc52078-b787-4bbd-9663-37600e9597b1
	I1101 00:09:33.999257   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:33.999262   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:33.999267   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:33.999272   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:33.999414   26955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-600483","namespace":"kube-system","uid":"c612ebac-fa1d-474a-b8cd-5e922a5f76dd","resourceVersion":"264","creationTimestamp":"2023-11-01T00:08:30Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.130:2379","kubernetes.io/config.hash":"5629fb0a0414e85632f97c416152ffbb","kubernetes.io/config.mirror":"5629fb0a0414e85632f97c416152ffbb","kubernetes.io/config.seen":"2023-11-01T00:08:30.293496672Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1101 00:09:33.999771   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:09:33.999784   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:33.999791   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:33.999797   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.001858   26955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:34.001879   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:34.001888   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.001896   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:34.001905   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:34.001920   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:33 GMT
	I1101 00:09:34.001927   26955 round_trippers.go:580]     Audit-Id: 2dab3d8f-be10-4874-b251-e279a1c7fd43
	I1101 00:09:34.001938   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.002062   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"382","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5906 chars]
	I1101 00:09:34.002355   26955 pod_ready.go:92] pod "etcd-multinode-600483" in "kube-system" namespace has status "Ready":"True"
	I1101 00:09:34.002369   26955 pod_ready.go:81] duration metric: took 5.339166ms waiting for pod "etcd-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:34.002381   26955 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:34.002426   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-600483
	I1101 00:09:34.002433   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:34.002440   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.002449   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.004445   26955 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1101 00:09:34.004459   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:34.004466   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.004471   26955 round_trippers.go:580]     Audit-Id: 57efee7a-7758-4756-9a8c-99efc80fcc0f
	I1101 00:09:34.004479   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.004491   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.004500   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:34.004509   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:34.004752   26955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-600483","namespace":"kube-system","uid":"bd94a63a-62c2-4654-aaf0-2e9df086b168","resourceVersion":"266","creationTimestamp":"2023-11-01T00:08:30Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.130:8443","kubernetes.io/config.hash":"99a9cda13526c350638742a7c7b2ba52","kubernetes.io/config.mirror":"99a9cda13526c350638742a7c7b2ba52","kubernetes.io/config.seen":"2023-11-01T00:08:30.293497612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1101 00:09:34.005233   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:09:34.005247   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:34.005257   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.005266   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.007009   26955 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1101 00:09:34.007021   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:34.007027   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:34.007036   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:34.007044   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.007052   26955 round_trippers.go:580]     Audit-Id: 7f37f7fb-977b-4277-9329-fcdabf2ef06f
	I1101 00:09:34.007066   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.007074   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.007186   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"382","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5906 chars]
	I1101 00:09:34.007436   26955 pod_ready.go:92] pod "kube-apiserver-multinode-600483" in "kube-system" namespace has status "Ready":"True"
	I1101 00:09:34.007450   26955 pod_ready.go:81] duration metric: took 5.059817ms waiting for pod "kube-apiserver-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:34.007458   26955 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:34.007510   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-600483
	I1101 00:09:34.007521   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:34.007531   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.007537   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.009373   26955 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1101 00:09:34.009389   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:34.009395   26955 round_trippers.go:580]     Audit-Id: d37089ff-c4d4-4181-b130-e4f75c2708ac
	I1101 00:09:34.009400   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.009405   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.009410   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:34.009415   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:34.009424   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.009702   26955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-600483","namespace":"kube-system","uid":"9dd41877-c6ea-4591-90e1-632a234ffcf6","resourceVersion":"289","creationTimestamp":"2023-11-01T00:08:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f2b1fcba8b34b1f65e600fae0bd4374a","kubernetes.io/config.mirror":"f2b1fcba8b34b1f65e600fae0bd4374a","kubernetes.io/config.seen":"2023-11-01T00:08:20.448799328Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1101 00:09:34.010045   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:09:34.010056   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:34.010063   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.010068   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.011857   26955 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1101 00:09:34.011873   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:34.011880   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.011885   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:34.011890   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:34.011895   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.011900   26955 round_trippers.go:580]     Audit-Id: 52c170ed-f359-45af-9513-5c47a194baf4
	I1101 00:09:34.011912   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.012168   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"382","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5906 chars]
	I1101 00:09:34.012421   26955 pod_ready.go:92] pod "kube-controller-manager-multinode-600483" in "kube-system" namespace has status "Ready":"True"
	I1101 00:09:34.012433   26955 pod_ready.go:81] duration metric: took 4.969273ms waiting for pod "kube-controller-manager-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:34.012441   26955 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7kvtf" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:34.179849   26955 request.go:629] Waited for 167.337639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7kvtf
	I1101 00:09:34.179902   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7kvtf
	I1101 00:09:34.179906   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:34.179914   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.179920   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.183509   26955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:34.183536   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:34.183546   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.183554   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.183563   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:34.183571   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:34.183579   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.183588   26955 round_trippers.go:580]     Audit-Id: b03cc70e-be84-4559-99ab-09d3b27cf39b
	I1101 00:09:34.184111   26955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7kvtf","generateName":"kube-proxy-","namespace":"kube-system","uid":"e2101b7f-e517-4100-905d-f46517e68255","resourceVersion":"469","creationTimestamp":"2023-11-01T00:09:23Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2d674cb3-a003-4ca9-a8b5-a283ae64b7c6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d674cb3-a003-4ca9-a8b5-a283ae64b7c6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5525 chars]
	I1101 00:09:34.379902   26955 request.go:629] Waited for 195.37554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m02
	I1101 00:09:34.379969   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m02
	I1101 00:09:34.379974   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:34.379982   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.379987   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.382844   26955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:34.382858   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:34.382864   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.382870   26955 round_trippers.go:580]     Audit-Id: 125eeef0-d793-4c18-992a-49104ff8f09f
	I1101 00:09:34.382875   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.382880   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.382886   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:34.382891   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:34.382896   26955 round_trippers.go:580]     Content-Length: 3726
	I1101 00:09:34.382956   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483-m02","uid":"5b2b1f13-2a35-43d5-86a5-bb5c1d6395e1","resourceVersion":"487","creationTimestamp":"2023-11-01T00:09:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2702 chars]
	I1101 00:09:34.383172   26955 pod_ready.go:92] pod "kube-proxy-7kvtf" in "kube-system" namespace has status "Ready":"True"
	I1101 00:09:34.383183   26955 pod_ready.go:81] duration metric: took 370.737834ms waiting for pod "kube-proxy-7kvtf" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:34.383191   26955 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tq28b" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:34.579649   26955 request.go:629] Waited for 196.40391ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tq28b
	I1101 00:09:34.579706   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tq28b
	I1101 00:09:34.579711   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:34.579718   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.579728   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.583270   26955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:34.583295   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:34.583303   26955 round_trippers.go:580]     Audit-Id: baa87d32-b1f3-4a60-8b0c-408062014fe7
	I1101 00:09:34.583308   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.583313   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.583319   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:34.583328   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:34.583338   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.583848   26955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tq28b","generateName":"kube-proxy-","namespace":"kube-system","uid":"9534d8b8-4536-4a0a-8af5-440e6871a85f","resourceVersion":"372","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2d674cb3-a003-4ca9-a8b5-a283ae64b7c6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d674cb3-a003-4ca9-a8b5-a283ae64b7c6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1101 00:09:34.779664   26955 request.go:629] Waited for 195.406462ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:09:34.779820   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:09:34.779847   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:34.779859   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.779877   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.782761   26955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:34.782779   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:34.782787   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.782792   26955 round_trippers.go:580]     Audit-Id: d94fb905-534b-4bfa-9f9f-a082cba8846a
	I1101 00:09:34.782797   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.782802   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.782807   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:34.782812   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:34.782970   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"382","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5906 chars]
	I1101 00:09:34.783318   26955 pod_ready.go:92] pod "kube-proxy-tq28b" in "kube-system" namespace has status "Ready":"True"
	I1101 00:09:34.783332   26955 pod_ready.go:81] duration metric: took 400.135302ms waiting for pod "kube-proxy-tq28b" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:34.783346   26955 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:34.979537   26955 request.go:629] Waited for 196.114673ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-600483
	I1101 00:09:34.979593   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-600483
	I1101 00:09:34.979598   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:34.979605   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.979611   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.983116   26955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:34.983136   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:34.983143   26955 round_trippers.go:580]     Audit-Id: 6d013bfb-bd60-4e80-b0d8-88b2067d2458
	I1101 00:09:34.983149   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.983154   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.983159   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:34.983165   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:34.983179   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.983394   26955 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-600483","namespace":"kube-system","uid":"9cdd0be5-035a-49f5-8796-831ebde28bf0","resourceVersion":"295","creationTimestamp":"2023-11-01T00:08:30Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"01c4e8f68a00a3553dcff3388cb56149","kubernetes.io/config.mirror":"01c4e8f68a00a3553dcff3388cb56149","kubernetes.io/config.seen":"2023-11-01T00:08:30.293495470Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1101 00:09:35.179408   26955 request.go:629] Waited for 195.661558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:09:35.179497   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:09:35.179508   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:35.179517   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:35.179528   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:35.182155   26955 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:35.182177   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:35.182186   26955 round_trippers.go:580]     Audit-Id: b2cce969-6407-481f-97d2-2b7d97b2ab11
	I1101 00:09:35.182194   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:35.182204   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:35.182212   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:35.182220   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:35.182229   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:35 GMT
	I1101 00:09:35.182419   26955 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"382","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5906 chars]
	I1101 00:09:35.182749   26955 pod_ready.go:92] pod "kube-scheduler-multinode-600483" in "kube-system" namespace has status "Ready":"True"
	I1101 00:09:35.182763   26955 pod_ready.go:81] duration metric: took 399.406733ms waiting for pod "kube-scheduler-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:35.182773   26955 pod_ready.go:38] duration metric: took 1.19982634s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 00:09:35.182794   26955 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 00:09:35.182838   26955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 00:09:35.196250   26955 system_svc.go:56] duration metric: took 13.446351ms WaitForService to wait for kubelet.
	I1101 00:09:35.196281   26955 kubeadm.go:581] duration metric: took 10.74380803s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1101 00:09:35.196303   26955 node_conditions.go:102] verifying NodePressure condition ...
	I1101 00:09:35.379776   26955 request.go:629] Waited for 183.400832ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/nodes
	I1101 00:09:35.379874   26955 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes
	I1101 00:09:35.379882   26955 round_trippers.go:469] Request Headers:
	I1101 00:09:35.379893   26955 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:35.379902   26955 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:35.382991   26955 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:35.383013   26955 round_trippers.go:577] Response Headers:
	I1101 00:09:35.383020   26955 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:35 GMT
	I1101 00:09:35.383025   26955 round_trippers.go:580]     Audit-Id: c693e5af-790b-4af4-a38b-ddaaaa3fed8b
	I1101 00:09:35.383030   26955 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:35.383035   26955 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:35.383042   26955 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:09:35.383047   26955 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:09:35.383245   26955 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"488"},"items":[{"metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"382","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"manage
dFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1"," [truncated 9653 chars]
	I1101 00:09:35.383818   26955 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 00:09:35.383848   26955 node_conditions.go:123] node cpu capacity is 2
	I1101 00:09:35.383860   26955 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 00:09:35.383867   26955 node_conditions.go:123] node cpu capacity is 2
	I1101 00:09:35.383874   26955 node_conditions.go:105] duration metric: took 187.566254ms to run NodePressure ...
	I1101 00:09:35.383887   26955 start.go:228] waiting for startup goroutines ...
	I1101 00:09:35.383918   26955 start.go:242] writing updated cluster config ...
	I1101 00:09:35.384316   26955 ssh_runner.go:195] Run: rm -f paused
	I1101 00:09:35.436495   26955 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1101 00:09:35.440173   26955 out.go:177] * Done! kubectl is now configured to use "multinode-600483" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-11-01 00:07:56 UTC, ends at Wed 2023-11-01 00:09:44 UTC. --
	Nov 01 00:09:44 multinode-600483 crio[714]: time="2023-11-01 00:09:44.134401841Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698797384134387529,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=8b59d926-1578-49ff-8ecb-d8cc4294429b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 00:09:44 multinode-600483 crio[714]: time="2023-11-01 00:09:44.135071876Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=da5da700-0b85-498a-bf8f-7b3d20375b93 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:09:44 multinode-600483 crio[714]: time="2023-11-01 00:09:44.135149333Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=da5da700-0b85-498a-bf8f-7b3d20375b93 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:09:44 multinode-600483 crio[714]: time="2023-11-01 00:09:44.135363001Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5eaa61a3e16e2fe51d229875488cd958e6f01bb47549e1545680f606110d3f9f,PodSandboxId:bc07c247e541f0fe0f62fca03401b531f5df1c9e86df6cdcac4daa36364522d4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1698797379620276754,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-8pjvd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85bd3938-9131-4eed-b6f7-7a4cd85f2cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 21c8e4ef,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ab98820acdbdaa4bae0e2f49ea04df3d27b1f7d4f42ec0a2e0ccd0eb2fef990,PodSandboxId:a4a9e680c266d7ba2d4910b2f93a56881d3427a21f9c117cc06370dae7a5370a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698797328445214931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rpvvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ab0ebb-aa1f-4143-b987-6c1ae065954a,},Annotations:map[string]string{io.kubernetes.container.hash: b7412969,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d49aa58556c6c2a081c344020d8314d1388b002209769f4f854d1f614d5581b0,PodSandboxId:ed69e30f3ec4acb4fc0bc378084bd3be173052cdbbb65ed7896c162e4f1e4aea,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698797328298853246,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: a67f136b-7645-4eb9-9568-52e3ab06d66e,},Annotations:map[string]string{io.kubernetes.container.hash: b02dd2ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29d3d1e43a49df27ff79c0633fd13d6f1d8d20b3c59c4a86a3483d28aad39bfb,PodSandboxId:d0319ac2f1bfec960228eef94f83ba50b1c35818206322ab2ab7676486dd4d65,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1698797325827701664,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l75r4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: abfa8ec3-0565-4927-a07c-9fed1240d270,},Annotations:map[string]string{io.kubernetes.container.hash: 616a3e1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e904d068b3d9b7ce573723b66379b5bf893f88b3b12ea08c7082b9a29451375,PodSandboxId:896370a9d61b94d910afef3f82a11d9335c4322ed8f75e169449957c32fed433,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698797323930017279,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tq28b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9534d8b8-4536-4a0a-8af5-440e68
71a85f,},Annotations:map[string]string{io.kubernetes.container.hash: b785be7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4c6afbd4c17e624c883aeb25047b01869799d02b12706a313d450067d13f80f,PodSandboxId:4a883846f810919e07b5be7459f04f17a4bb97c52d04f3ad2c3a2e567f913436,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698797302294814302,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-600483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5629fb0a0414e85632f97c416152ffbb,},Annotations:map[string]string{io.kubernetes.
container.hash: 4ec05eac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a2eeeec3f971ac8256579c0482bb184327045a76a8f599cee6d39fbf05cfead,PodSandboxId:6ce0157f497f57cc57984492e61b72fa3c3ed52998483d1ce4abfa91fe59d4ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698797301767042112,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-600483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01c4e8f68a00a3553dcff3388cb56149,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fd20fddef2a020007a5fb923d3407c9c68eeafcea8d776edb3c34de9f7872ea,PodSandboxId:dc6b73fb94db8e328a653a86f6e69a962bec9581b44112384961700f307c3a82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698797301722821478,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-600483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2b1fcba8b34b1f65e600fae0bd4374a,},Annotations:map[string]string{io
.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52df3596b4dbf5cbd15e7b446e5e8f49f1d8fba2c92717b82edea9d0c1323801,PodSandboxId:5021a32b3ae847f6a2caefdc3ba7e6191ed71f1a0054315b0c406da46f0e5a4c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698797301400328114,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-600483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99a9cda13526c350638742a7c7b2ba52,},Annotations:map[string]string{io.kubernetes.
container.hash: 7bfab165,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=da5da700-0b85-498a-bf8f-7b3d20375b93 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:09:44 multinode-600483 crio[714]: time="2023-11-01 00:09:44.175904371Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=31a7e4d0-6b6b-4ede-b19d-03a0bdc9ccfe name=/runtime.v1.RuntimeService/Version
	Nov 01 00:09:44 multinode-600483 crio[714]: time="2023-11-01 00:09:44.175962785Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=31a7e4d0-6b6b-4ede-b19d-03a0bdc9ccfe name=/runtime.v1.RuntimeService/Version
	Nov 01 00:09:44 multinode-600483 crio[714]: time="2023-11-01 00:09:44.177112087Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=24eb2b9b-ce8a-4d6f-a2e1-a42d975d7aa5 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 00:09:44 multinode-600483 crio[714]: time="2023-11-01 00:09:44.177471664Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698797384177460994,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=24eb2b9b-ce8a-4d6f-a2e1-a42d975d7aa5 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 00:09:44 multinode-600483 crio[714]: time="2023-11-01 00:09:44.178027197Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d92e05ed-6fdc-4177-83c5-ab1f94ba587f name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:09:44 multinode-600483 crio[714]: time="2023-11-01 00:09:44.178080562Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d92e05ed-6fdc-4177-83c5-ab1f94ba587f name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:09:44 multinode-600483 crio[714]: time="2023-11-01 00:09:44.178259629Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5eaa61a3e16e2fe51d229875488cd958e6f01bb47549e1545680f606110d3f9f,PodSandboxId:bc07c247e541f0fe0f62fca03401b531f5df1c9e86df6cdcac4daa36364522d4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1698797379620276754,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-8pjvd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85bd3938-9131-4eed-b6f7-7a4cd85f2cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 21c8e4ef,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ab98820acdbdaa4bae0e2f49ea04df3d27b1f7d4f42ec0a2e0ccd0eb2fef990,PodSandboxId:a4a9e680c266d7ba2d4910b2f93a56881d3427a21f9c117cc06370dae7a5370a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698797328445214931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rpvvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ab0ebb-aa1f-4143-b987-6c1ae065954a,},Annotations:map[string]string{io.kubernetes.container.hash: b7412969,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d49aa58556c6c2a081c344020d8314d1388b002209769f4f854d1f614d5581b0,PodSandboxId:ed69e30f3ec4acb4fc0bc378084bd3be173052cdbbb65ed7896c162e4f1e4aea,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698797328298853246,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: a67f136b-7645-4eb9-9568-52e3ab06d66e,},Annotations:map[string]string{io.kubernetes.container.hash: b02dd2ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29d3d1e43a49df27ff79c0633fd13d6f1d8d20b3c59c4a86a3483d28aad39bfb,PodSandboxId:d0319ac2f1bfec960228eef94f83ba50b1c35818206322ab2ab7676486dd4d65,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1698797325827701664,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l75r4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: abfa8ec3-0565-4927-a07c-9fed1240d270,},Annotations:map[string]string{io.kubernetes.container.hash: 616a3e1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e904d068b3d9b7ce573723b66379b5bf893f88b3b12ea08c7082b9a29451375,PodSandboxId:896370a9d61b94d910afef3f82a11d9335c4322ed8f75e169449957c32fed433,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698797323930017279,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tq28b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9534d8b8-4536-4a0a-8af5-440e68
71a85f,},Annotations:map[string]string{io.kubernetes.container.hash: b785be7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4c6afbd4c17e624c883aeb25047b01869799d02b12706a313d450067d13f80f,PodSandboxId:4a883846f810919e07b5be7459f04f17a4bb97c52d04f3ad2c3a2e567f913436,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698797302294814302,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-600483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5629fb0a0414e85632f97c416152ffbb,},Annotations:map[string]string{io.kubernetes.
container.hash: 4ec05eac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a2eeeec3f971ac8256579c0482bb184327045a76a8f599cee6d39fbf05cfead,PodSandboxId:6ce0157f497f57cc57984492e61b72fa3c3ed52998483d1ce4abfa91fe59d4ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698797301767042112,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-600483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01c4e8f68a00a3553dcff3388cb56149,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fd20fddef2a020007a5fb923d3407c9c68eeafcea8d776edb3c34de9f7872ea,PodSandboxId:dc6b73fb94db8e328a653a86f6e69a962bec9581b44112384961700f307c3a82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698797301722821478,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-600483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2b1fcba8b34b1f65e600fae0bd4374a,},Annotations:map[string]string{io
.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52df3596b4dbf5cbd15e7b446e5e8f49f1d8fba2c92717b82edea9d0c1323801,PodSandboxId:5021a32b3ae847f6a2caefdc3ba7e6191ed71f1a0054315b0c406da46f0e5a4c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698797301400328114,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-600483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99a9cda13526c350638742a7c7b2ba52,},Annotations:map[string]string{io.kubernetes.
container.hash: 7bfab165,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d92e05ed-6fdc-4177-83c5-ab1f94ba587f name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:09:44 multinode-600483 crio[714]: time="2023-11-01 00:09:44.218696649Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=361deca9-df89-44bf-bed5-0571811a7b5e name=/runtime.v1.RuntimeService/Version
	Nov 01 00:09:44 multinode-600483 crio[714]: time="2023-11-01 00:09:44.218762647Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=361deca9-df89-44bf-bed5-0571811a7b5e name=/runtime.v1.RuntimeService/Version
	Nov 01 00:09:44 multinode-600483 crio[714]: time="2023-11-01 00:09:44.219907684Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e7e0b575-1a71-4e14-8147-4157a378c0c9 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 00:09:44 multinode-600483 crio[714]: time="2023-11-01 00:09:44.220283918Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698797384220268462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=e7e0b575-1a71-4e14-8147-4157a378c0c9 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 00:09:44 multinode-600483 crio[714]: time="2023-11-01 00:09:44.220882451Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ad464b4e-ea5a-42d8-adf3-2a217c4ee469 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:09:44 multinode-600483 crio[714]: time="2023-11-01 00:09:44.220932560Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ad464b4e-ea5a-42d8-adf3-2a217c4ee469 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:09:44 multinode-600483 crio[714]: time="2023-11-01 00:09:44.221123075Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5eaa61a3e16e2fe51d229875488cd958e6f01bb47549e1545680f606110d3f9f,PodSandboxId:bc07c247e541f0fe0f62fca03401b531f5df1c9e86df6cdcac4daa36364522d4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1698797379620276754,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-8pjvd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85bd3938-9131-4eed-b6f7-7a4cd85f2cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 21c8e4ef,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ab98820acdbdaa4bae0e2f49ea04df3d27b1f7d4f42ec0a2e0ccd0eb2fef990,PodSandboxId:a4a9e680c266d7ba2d4910b2f93a56881d3427a21f9c117cc06370dae7a5370a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698797328445214931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rpvvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ab0ebb-aa1f-4143-b987-6c1ae065954a,},Annotations:map[string]string{io.kubernetes.container.hash: b7412969,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d49aa58556c6c2a081c344020d8314d1388b002209769f4f854d1f614d5581b0,PodSandboxId:ed69e30f3ec4acb4fc0bc378084bd3be173052cdbbb65ed7896c162e4f1e4aea,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698797328298853246,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: a67f136b-7645-4eb9-9568-52e3ab06d66e,},Annotations:map[string]string{io.kubernetes.container.hash: b02dd2ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29d3d1e43a49df27ff79c0633fd13d6f1d8d20b3c59c4a86a3483d28aad39bfb,PodSandboxId:d0319ac2f1bfec960228eef94f83ba50b1c35818206322ab2ab7676486dd4d65,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1698797325827701664,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l75r4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: abfa8ec3-0565-4927-a07c-9fed1240d270,},Annotations:map[string]string{io.kubernetes.container.hash: 616a3e1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e904d068b3d9b7ce573723b66379b5bf893f88b3b12ea08c7082b9a29451375,PodSandboxId:896370a9d61b94d910afef3f82a11d9335c4322ed8f75e169449957c32fed433,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698797323930017279,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tq28b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9534d8b8-4536-4a0a-8af5-440e68
71a85f,},Annotations:map[string]string{io.kubernetes.container.hash: b785be7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4c6afbd4c17e624c883aeb25047b01869799d02b12706a313d450067d13f80f,PodSandboxId:4a883846f810919e07b5be7459f04f17a4bb97c52d04f3ad2c3a2e567f913436,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698797302294814302,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-600483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5629fb0a0414e85632f97c416152ffbb,},Annotations:map[string]string{io.kubernetes.
container.hash: 4ec05eac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a2eeeec3f971ac8256579c0482bb184327045a76a8f599cee6d39fbf05cfead,PodSandboxId:6ce0157f497f57cc57984492e61b72fa3c3ed52998483d1ce4abfa91fe59d4ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698797301767042112,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-600483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01c4e8f68a00a3553dcff3388cb56149,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fd20fddef2a020007a5fb923d3407c9c68eeafcea8d776edb3c34de9f7872ea,PodSandboxId:dc6b73fb94db8e328a653a86f6e69a962bec9581b44112384961700f307c3a82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698797301722821478,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-600483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2b1fcba8b34b1f65e600fae0bd4374a,},Annotations:map[string]string{io
.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52df3596b4dbf5cbd15e7b446e5e8f49f1d8fba2c92717b82edea9d0c1323801,PodSandboxId:5021a32b3ae847f6a2caefdc3ba7e6191ed71f1a0054315b0c406da46f0e5a4c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698797301400328114,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-600483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99a9cda13526c350638742a7c7b2ba52,},Annotations:map[string]string{io.kubernetes.
container.hash: 7bfab165,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ad464b4e-ea5a-42d8-adf3-2a217c4ee469 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:09:44 multinode-600483 crio[714]: time="2023-11-01 00:09:44.262967235Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=45c36fad-b71c-44fb-8f83-bcf3d461eb81 name=/runtime.v1.RuntimeService/Version
	Nov 01 00:09:44 multinode-600483 crio[714]: time="2023-11-01 00:09:44.263039374Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=45c36fad-b71c-44fb-8f83-bcf3d461eb81 name=/runtime.v1.RuntimeService/Version
	Nov 01 00:09:44 multinode-600483 crio[714]: time="2023-11-01 00:09:44.264576628Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ad3f9e6d-e7b7-4e8f-ad7d-b304dd7c6557 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 00:09:44 multinode-600483 crio[714]: time="2023-11-01 00:09:44.265055568Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698797384265037504,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=ad3f9e6d-e7b7-4e8f-ad7d-b304dd7c6557 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 00:09:44 multinode-600483 crio[714]: time="2023-11-01 00:09:44.265836621Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b095f9e9-ab91-4270-9516-5373666bc9fb name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:09:44 multinode-600483 crio[714]: time="2023-11-01 00:09:44.265886749Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b095f9e9-ab91-4270-9516-5373666bc9fb name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:09:44 multinode-600483 crio[714]: time="2023-11-01 00:09:44.266098710Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5eaa61a3e16e2fe51d229875488cd958e6f01bb47549e1545680f606110d3f9f,PodSandboxId:bc07c247e541f0fe0f62fca03401b531f5df1c9e86df6cdcac4daa36364522d4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1698797379620276754,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-8pjvd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85bd3938-9131-4eed-b6f7-7a4cd85f2cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 21c8e4ef,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ab98820acdbdaa4bae0e2f49ea04df3d27b1f7d4f42ec0a2e0ccd0eb2fef990,PodSandboxId:a4a9e680c266d7ba2d4910b2f93a56881d3427a21f9c117cc06370dae7a5370a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698797328445214931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rpvvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ab0ebb-aa1f-4143-b987-6c1ae065954a,},Annotations:map[string]string{io.kubernetes.container.hash: b7412969,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d49aa58556c6c2a081c344020d8314d1388b002209769f4f854d1f614d5581b0,PodSandboxId:ed69e30f3ec4acb4fc0bc378084bd3be173052cdbbb65ed7896c162e4f1e4aea,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698797328298853246,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: a67f136b-7645-4eb9-9568-52e3ab06d66e,},Annotations:map[string]string{io.kubernetes.container.hash: b02dd2ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29d3d1e43a49df27ff79c0633fd13d6f1d8d20b3c59c4a86a3483d28aad39bfb,PodSandboxId:d0319ac2f1bfec960228eef94f83ba50b1c35818206322ab2ab7676486dd4d65,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1698797325827701664,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l75r4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: abfa8ec3-0565-4927-a07c-9fed1240d270,},Annotations:map[string]string{io.kubernetes.container.hash: 616a3e1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e904d068b3d9b7ce573723b66379b5bf893f88b3b12ea08c7082b9a29451375,PodSandboxId:896370a9d61b94d910afef3f82a11d9335c4322ed8f75e169449957c32fed433,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698797323930017279,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tq28b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9534d8b8-4536-4a0a-8af5-440e68
71a85f,},Annotations:map[string]string{io.kubernetes.container.hash: b785be7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4c6afbd4c17e624c883aeb25047b01869799d02b12706a313d450067d13f80f,PodSandboxId:4a883846f810919e07b5be7459f04f17a4bb97c52d04f3ad2c3a2e567f913436,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698797302294814302,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-600483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5629fb0a0414e85632f97c416152ffbb,},Annotations:map[string]string{io.kubernetes.
container.hash: 4ec05eac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a2eeeec3f971ac8256579c0482bb184327045a76a8f599cee6d39fbf05cfead,PodSandboxId:6ce0157f497f57cc57984492e61b72fa3c3ed52998483d1ce4abfa91fe59d4ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698797301767042112,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-600483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01c4e8f68a00a3553dcff3388cb56149,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fd20fddef2a020007a5fb923d3407c9c68eeafcea8d776edb3c34de9f7872ea,PodSandboxId:dc6b73fb94db8e328a653a86f6e69a962bec9581b44112384961700f307c3a82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698797301722821478,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-600483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2b1fcba8b34b1f65e600fae0bd4374a,},Annotations:map[string]string{io
.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52df3596b4dbf5cbd15e7b446e5e8f49f1d8fba2c92717b82edea9d0c1323801,PodSandboxId:5021a32b3ae847f6a2caefdc3ba7e6191ed71f1a0054315b0c406da46f0e5a4c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698797301400328114,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-600483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99a9cda13526c350638742a7c7b2ba52,},Annotations:map[string]string{io.kubernetes.
container.hash: 7bfab165,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b095f9e9-ab91-4270-9516-5373666bc9fb name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	5eaa61a3e16e2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   bc07c247e541f       busybox-5bc68d56bd-8pjvd
	1ab98820acdbd       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      55 seconds ago       Running             coredns                   0                   a4a9e680c266d       coredns-5dd5756b68-rpvvn
	d49aa58556c6c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      56 seconds ago       Running             storage-provisioner       0                   ed69e30f3ec4a       storage-provisioner
	29d3d1e43a49d       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      58 seconds ago       Running             kindnet-cni               0                   d0319ac2f1bfe       kindnet-l75r4
	1e904d068b3d9       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                      About a minute ago   Running             kube-proxy                0                   896370a9d61b9       kube-proxy-tq28b
	a4c6afbd4c17e       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   4a883846f8109       etcd-multinode-600483
	4a2eeeec3f971       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                      About a minute ago   Running             kube-scheduler            0                   6ce0157f497f5       kube-scheduler-multinode-600483
	3fd20fddef2a0       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                      About a minute ago   Running             kube-controller-manager   0                   dc6b73fb94db8       kube-controller-manager-multinode-600483
	52df3596b4dbf       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                      About a minute ago   Running             kube-apiserver            0                   5021a32b3ae84       kube-apiserver-multinode-600483
	
	* 
	* ==> coredns [1ab98820acdbdaa4bae0e2f49ea04df3d27b1f7d4f42ec0a2e0ccd0eb2fef990] <==
	* [INFO] 10.244.1.2:44964 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000239714s
	[INFO] 10.244.0.3:60748 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123858s
	[INFO] 10.244.0.3:56626 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001845169s
	[INFO] 10.244.0.3:53073 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000093371s
	[INFO] 10.244.0.3:55948 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000125667s
	[INFO] 10.244.0.3:32950 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00134791s
	[INFO] 10.244.0.3:57958 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000065762s
	[INFO] 10.244.0.3:51616 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082258s
	[INFO] 10.244.0.3:44642 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00005654s
	[INFO] 10.244.1.2:50827 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141095s
	[INFO] 10.244.1.2:45588 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000095159s
	[INFO] 10.244.1.2:57874 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079954s
	[INFO] 10.244.1.2:48546 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086255s
	[INFO] 10.244.0.3:46079 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000079159s
	[INFO] 10.244.0.3:47566 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115762s
	[INFO] 10.244.0.3:45267 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000042379s
	[INFO] 10.244.0.3:59422 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000039776s
	[INFO] 10.244.1.2:51058 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011647s
	[INFO] 10.244.1.2:59680 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000607161s
	[INFO] 10.244.1.2:42509 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00010419s
	[INFO] 10.244.1.2:46240 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000098425s
	[INFO] 10.244.0.3:55397 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000192665s
	[INFO] 10.244.0.3:54293 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000035149s
	[INFO] 10.244.0.3:46742 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000049005s
	[INFO] 10.244.0.3:45975 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000031471s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-600483
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-600483
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9
	                    minikube.k8s.io/name=multinode-600483
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_01T00_08_31_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Nov 2023 00:08:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-600483
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Nov 2023 00:09:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Nov 2023 00:08:47 +0000   Wed, 01 Nov 2023 00:08:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Nov 2023 00:08:47 +0000   Wed, 01 Nov 2023 00:08:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Nov 2023 00:08:47 +0000   Wed, 01 Nov 2023 00:08:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Nov 2023 00:08:47 +0000   Wed, 01 Nov 2023 00:08:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.130
	  Hostname:    multinode-600483
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 c90a2c78054c41b8a38c897c10fb049f
	  System UUID:                c90a2c78-054c-41b8-a38c-897c10fb049f
	  Boot ID:                    ecd3d791-d0e3-4135-8f62-73cc41848403
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-8pjvd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-5dd5756b68-rpvvn                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     62s
	  kube-system                 etcd-multinode-600483                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         74s
	  kube-system                 kindnet-l75r4                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      62s
	  kube-system                 kube-apiserver-multinode-600483             250m (12%)    0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-controller-manager-multinode-600483    200m (10%)    0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-proxy-tq28b                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-scheduler-multinode-600483             100m (5%)     0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 60s   kube-proxy       
	  Normal  Starting                 74s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  74s   kubelet          Node multinode-600483 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    74s   kubelet          Node multinode-600483 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     74s   kubelet          Node multinode-600483 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  74s   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           63s   node-controller  Node multinode-600483 event: Registered Node multinode-600483 in Controller
	  Normal  NodeReady                57s   kubelet          Node multinode-600483 status is now: NodeReady
	
	
	Name:               multinode-600483-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-600483-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Nov 2023 00:09:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-600483-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Nov 2023 00:09:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Nov 2023 00:09:33 +0000   Wed, 01 Nov 2023 00:09:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Nov 2023 00:09:33 +0000   Wed, 01 Nov 2023 00:09:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Nov 2023 00:09:33 +0000   Wed, 01 Nov 2023 00:09:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Nov 2023 00:09:33 +0000   Wed, 01 Nov 2023 00:09:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.109
	  Hostname:    multinode-600483-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 f78e4f3cc00e410291c60e98e0c3e140
	  System UUID:                f78e4f3c-c00e-4102-91c6-0e98e0c3e140
	  Boot ID:                    6c188077-8669-4787-ac18-4d43e70351fa
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-6jjms    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-d4f6q               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21s
	  kube-system                 kube-proxy-7kvtf            0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16s                kube-proxy       
	  Normal  NodeHasSufficientMemory  21s (x5 over 23s)  kubelet          Node multinode-600483-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x5 over 23s)  kubelet          Node multinode-600483-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x5 over 23s)  kubelet          Node multinode-600483-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18s                node-controller  Node multinode-600483-m02 event: Registered Node multinode-600483-m02 in Controller
	  Normal  NodeReady                11s                kubelet          Node multinode-600483-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Nov 1 00:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.064631] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.348912] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.994887] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.137306] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Nov 1 00:08] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.657932] systemd-fstab-generator[639]: Ignoring "noauto" for root device
	[  +0.117859] systemd-fstab-generator[650]: Ignoring "noauto" for root device
	[  +0.149050] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.097542] systemd-fstab-generator[674]: Ignoring "noauto" for root device
	[  +0.226274] systemd-fstab-generator[698]: Ignoring "noauto" for root device
	[  +9.145039] systemd-fstab-generator[922]: Ignoring "noauto" for root device
	[ +10.283285] systemd-fstab-generator[1256]: Ignoring "noauto" for root device
	[ +19.513308] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [a4c6afbd4c17e624c883aeb25047b01869799d02b12706a313d450067d13f80f] <==
	* {"level":"info","ts":"2023-11-01T00:08:24.472276Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3bfdfb8084d9036b received MsgVoteResp from 3bfdfb8084d9036b at term 2"}
	{"level":"info","ts":"2023-11-01T00:08:24.472285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3bfdfb8084d9036b became leader at term 2"}
	{"level":"info","ts":"2023-11-01T00:08:24.472292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3bfdfb8084d9036b elected leader 3bfdfb8084d9036b at term 2"}
	{"level":"info","ts":"2023-11-01T00:08:24.473937Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"3bfdfb8084d9036b","local-member-attributes":"{Name:multinode-600483 ClientURLs:[https://192.168.39.130:2379]}","request-path":"/0/members/3bfdfb8084d9036b/attributes","cluster-id":"b31a7968a7efeeee","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-01T00:08:24.473992Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-01T00:08:24.474091Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-01T00:08:24.475264Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.130:2379"}
	{"level":"info","ts":"2023-11-01T00:08:24.475267Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-01T00:08:24.475519Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T00:08:24.475911Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-01T00:08:24.475946Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-01T00:08:24.476802Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b31a7968a7efeeee","local-member-id":"3bfdfb8084d9036b","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T00:08:24.47693Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T00:08:24.476976Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T00:09:26.998072Z","caller":"traceutil/trace.go:171","msg":"trace[1263563057] linearizableReadLoop","detail":"{readStateIndex:484; appliedIndex:483; }","duration":"167.346233ms","start":"2023-11-01T00:09:26.830692Z","end":"2023-11-01T00:09:26.998038Z","steps":["trace[1263563057] 'read index received'  (duration: 124.168733ms)","trace[1263563057] 'applied index is now lower than readState.Index'  (duration: 43.176724ms)"],"step_count":2}
	{"level":"info","ts":"2023-11-01T00:09:26.998309Z","caller":"traceutil/trace.go:171","msg":"trace[400036148] transaction","detail":"{read_only:false; response_revision:463; number_of_response:1; }","duration":"237.873215ms","start":"2023-11-01T00:09:26.760419Z","end":"2023-11-01T00:09:26.998292Z","steps":["trace[400036148] 'process raft request'  (duration: 194.485859ms)","trace[400036148] 'compare'  (duration: 43.010774ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-01T00:09:26.998478Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.755773ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2023-11-01T00:09:26.998604Z","caller":"traceutil/trace.go:171","msg":"trace[1290782095] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:463; }","duration":"167.960888ms","start":"2023-11-01T00:09:26.83063Z","end":"2023-11-01T00:09:26.998591Z","steps":["trace[1290782095] 'agreement among raft nodes before linearized reading'  (duration: 167.715005ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-01T00:09:27.231058Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.491163ms","expected-duration":"100ms","prefix":"","request":"header:<ID:246444021754931133 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:460 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-11-01T00:09:27.231176Z","caller":"traceutil/trace.go:171","msg":"trace[1829653147] linearizableReadLoop","detail":"{readStateIndex:485; appliedIndex:484; }","duration":"124.624137ms","start":"2023-11-01T00:09:27.106541Z","end":"2023-11-01T00:09:27.231165Z","steps":["trace[1829653147] 'read index received'  (duration: 15.320373ms)","trace[1829653147] 'applied index is now lower than readState.Index'  (duration: 109.302098ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-01T00:09:27.231253Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.725625ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:2 size:7977"}
	{"level":"info","ts":"2023-11-01T00:09:27.231268Z","caller":"traceutil/trace.go:171","msg":"trace[126815648] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:2; response_revision:464; }","duration":"124.746049ms","start":"2023-11-01T00:09:27.106518Z","end":"2023-11-01T00:09:27.231264Z","steps":["trace[126815648] 'agreement among raft nodes before linearized reading'  (duration: 124.684845ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-01T00:09:27.23141Z","caller":"traceutil/trace.go:171","msg":"trace[1012079942] transaction","detail":"{read_only:false; response_revision:464; number_of_response:1; }","duration":"226.188329ms","start":"2023-11-01T00:09:27.005214Z","end":"2023-11-01T00:09:27.231402Z","steps":["trace[1012079942] 'process raft request'  (duration: 116.702649ms)","trace[1012079942] 'compare'  (duration: 108.385505ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-01T00:09:32.022734Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.863146ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-11-01T00:09:32.022862Z","caller":"traceutil/trace.go:171","msg":"trace[1219250649] range","detail":"{range_begin:/registry/serviceaccounts/; range_end:/registry/serviceaccounts0; response_count:0; response_revision:476; }","duration":"122.017599ms","start":"2023-11-01T00:09:31.900832Z","end":"2023-11-01T00:09:32.02285Z","steps":["trace[1219250649] 'count revisions from in-memory index tree'  (duration: 121.706081ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  00:09:44 up 1 min,  0 users,  load average: 0.48, 0.26, 0.09
	Linux multinode-600483 5.10.57 #1 SMP Tue Oct 31 22:14:31 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [29d3d1e43a49df27ff79c0633fd13d6f1d8d20b3c59c4a86a3483d28aad39bfb] <==
	* I1101 00:08:46.574784       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1101 00:08:46.575002       1 main.go:107] hostIP = 192.168.39.130
	podIP = 192.168.39.130
	I1101 00:08:46.575402       1 main.go:116] setting mtu 1500 for CNI 
	I1101 00:08:46.575437       1 main.go:146] kindnetd IP family: "ipv4"
	I1101 00:08:46.575460       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1101 00:08:47.072865       1 main.go:223] Handling node with IPs: map[192.168.39.130:{}]
	I1101 00:08:47.072910       1 main.go:227] handling current node
	I1101 00:08:57.080955       1 main.go:223] Handling node with IPs: map[192.168.39.130:{}]
	I1101 00:08:57.081067       1 main.go:227] handling current node
	I1101 00:09:07.089168       1 main.go:223] Handling node with IPs: map[192.168.39.130:{}]
	I1101 00:09:07.089225       1 main.go:227] handling current node
	I1101 00:09:17.095395       1 main.go:223] Handling node with IPs: map[192.168.39.130:{}]
	I1101 00:09:17.095531       1 main.go:227] handling current node
	I1101 00:09:27.233576       1 main.go:223] Handling node with IPs: map[192.168.39.130:{}]
	I1101 00:09:27.233635       1 main.go:227] handling current node
	I1101 00:09:27.233697       1 main.go:223] Handling node with IPs: map[192.168.39.109:{}]
	I1101 00:09:27.233707       1 main.go:250] Node multinode-600483-m02 has CIDR [10.244.1.0/24] 
	I1101 00:09:27.233967       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.109 Flags: [] Table: 0} 
	I1101 00:09:37.242908       1 main.go:223] Handling node with IPs: map[192.168.39.130:{}]
	I1101 00:09:37.242949       1 main.go:227] handling current node
	I1101 00:09:37.242966       1 main.go:223] Handling node with IPs: map[192.168.39.109:{}]
	I1101 00:09:37.242972       1 main.go:250] Node multinode-600483-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [52df3596b4dbf5cbd15e7b446e5e8f49f1d8fba2c92717b82edea9d0c1323801] <==
	* I1101 00:08:26.662184       1 aggregator.go:166] initial CRD sync complete...
	I1101 00:08:26.662191       1 autoregister_controller.go:141] Starting autoregister controller
	I1101 00:08:26.662196       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 00:08:26.662201       1 cache.go:39] Caches are synced for autoregister controller
	I1101 00:08:26.664982       1 controller.go:624] quota admission added evaluator for: namespaces
	I1101 00:08:26.665796       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1101 00:08:26.667808       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1101 00:08:26.667840       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1101 00:08:26.677114       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1101 00:08:26.709547       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 00:08:27.567432       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 00:08:27.572604       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 00:08:27.572780       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 00:08:28.297874       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 00:08:28.342053       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 00:08:28.479221       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 00:08:28.494801       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.130]
	I1101 00:08:28.495890       1 controller.go:624] quota admission added evaluator for: endpoints
	I1101 00:08:28.502817       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 00:08:28.627143       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1101 00:08:30.220820       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1101 00:08:30.251690       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 00:08:30.262210       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1101 00:08:42.277018       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1101 00:08:42.373477       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [3fd20fddef2a020007a5fb923d3407c9c68eeafcea8d776edb3c34de9f7872ea] <==
	* I1101 00:08:47.447833       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="119.902µs"
	I1101 00:08:47.471437       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="625.929µs"
	I1101 00:08:49.551965       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="165.74µs"
	I1101 00:08:49.589321       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.435749ms"
	I1101 00:08:49.589513       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.493µs"
	I1101 00:08:51.715860       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1101 00:09:23.487475       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-600483-m02\" does not exist"
	I1101 00:09:23.514131       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-d4f6q"
	I1101 00:09:23.527246       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-600483-m02" podCIDRs=["10.244.1.0/24"]
	I1101 00:09:23.527866       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-7kvtf"
	I1101 00:09:26.721893       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-600483-m02"
	I1101 00:09:26.722094       1 event.go:307] "Event occurred" object="multinode-600483-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-600483-m02 event: Registered Node multinode-600483-m02 in Controller"
	I1101 00:09:33.769577       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-600483-m02"
	I1101 00:09:36.162478       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1101 00:09:36.180249       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-6jjms"
	I1101 00:09:36.188912       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-8pjvd"
	I1101 00:09:36.216002       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="53.141456ms"
	I1101 00:09:36.238769       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="22.650246ms"
	I1101 00:09:36.238867       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="49.581µs"
	I1101 00:09:36.239143       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="72.273µs"
	I1101 00:09:36.740197       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-6jjms" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-6jjms"
	I1101 00:09:40.259534       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.160568ms"
	I1101 00:09:40.259631       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="39.609µs"
	I1101 00:09:40.721287       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="7.352945ms"
	I1101 00:09:40.721417       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="38.065µs"
	
	* 
	* ==> kube-proxy [1e904d068b3d9b7ce573723b66379b5bf893f88b3b12ea08c7082b9a29451375] <==
	* I1101 00:08:44.115214       1 server_others.go:69] "Using iptables proxy"
	I1101 00:08:44.134046       1 node.go:141] Successfully retrieved node IP: 192.168.39.130
	I1101 00:08:44.184717       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1101 00:08:44.184800       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 00:08:44.187480       1 server_others.go:152] "Using iptables Proxier"
	I1101 00:08:44.187571       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 00:08:44.187825       1 server.go:846] "Version info" version="v1.28.3"
	I1101 00:08:44.188020       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 00:08:44.188863       1 config.go:188] "Starting service config controller"
	I1101 00:08:44.188935       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 00:08:44.188983       1 config.go:97] "Starting endpoint slice config controller"
	I1101 00:08:44.188999       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 00:08:44.189781       1 config.go:315] "Starting node config controller"
	I1101 00:08:44.189827       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 00:08:44.289567       1 shared_informer.go:318] Caches are synced for service config
	I1101 00:08:44.289430       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1101 00:08:44.290071       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [4a2eeeec3f971ac8256579c0482bb184327045a76a8f599cee6d39fbf05cfead] <==
	* W1101 00:08:26.702797       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1101 00:08:26.703600       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1101 00:08:26.703432       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1101 00:08:26.703734       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1101 00:08:26.703881       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1101 00:08:26.703955       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1101 00:08:26.706896       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1101 00:08:26.706945       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1101 00:08:27.564785       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1101 00:08:27.564833       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1101 00:08:27.640266       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1101 00:08:27.640349       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1101 00:08:27.703721       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1101 00:08:27.703770       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1101 00:08:27.875743       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1101 00:08:27.875852       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1101 00:08:27.942453       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1101 00:08:27.942622       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1101 00:08:27.948418       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1101 00:08:27.948503       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1101 00:08:27.981966       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1101 00:08:27.982074       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1101 00:08:28.124210       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1101 00:08:28.124260       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1101 00:08:31.066890       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-11-01 00:07:56 UTC, ends at Wed 2023-11-01 00:09:44 UTC. --
	Nov 01 00:08:42 multinode-600483 kubelet[1263]: I1101 00:08:42.505815    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/abfa8ec3-0565-4927-a07c-9fed1240d270-cni-cfg\") pod \"kindnet-l75r4\" (UID: \"abfa8ec3-0565-4927-a07c-9fed1240d270\") " pod="kube-system/kindnet-l75r4"
	Nov 01 00:08:42 multinode-600483 kubelet[1263]: I1101 00:08:42.505839    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/abfa8ec3-0565-4927-a07c-9fed1240d270-lib-modules\") pod \"kindnet-l75r4\" (UID: \"abfa8ec3-0565-4927-a07c-9fed1240d270\") " pod="kube-system/kindnet-l75r4"
	Nov 01 00:08:42 multinode-600483 kubelet[1263]: I1101 00:08:42.505864    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9534d8b8-4536-4a0a-8af5-440e6871a85f-lib-modules\") pod \"kube-proxy-tq28b\" (UID: \"9534d8b8-4536-4a0a-8af5-440e6871a85f\") " pod="kube-system/kube-proxy-tq28b"
	Nov 01 00:08:42 multinode-600483 kubelet[1263]: I1101 00:08:42.505885    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9534d8b8-4536-4a0a-8af5-440e6871a85f-xtables-lock\") pod \"kube-proxy-tq28b\" (UID: \"9534d8b8-4536-4a0a-8af5-440e6871a85f\") " pod="kube-system/kube-proxy-tq28b"
	Nov 01 00:08:42 multinode-600483 kubelet[1263]: I1101 00:08:42.505903    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/abfa8ec3-0565-4927-a07c-9fed1240d270-xtables-lock\") pod \"kindnet-l75r4\" (UID: \"abfa8ec3-0565-4927-a07c-9fed1240d270\") " pod="kube-system/kindnet-l75r4"
	Nov 01 00:08:42 multinode-600483 kubelet[1263]: I1101 00:08:42.505924    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9534d8b8-4536-4a0a-8af5-440e6871a85f-kube-proxy\") pod \"kube-proxy-tq28b\" (UID: \"9534d8b8-4536-4a0a-8af5-440e6871a85f\") " pod="kube-system/kube-proxy-tq28b"
	Nov 01 00:08:42 multinode-600483 kubelet[1263]: I1101 00:08:42.505942    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb4z8\" (UniqueName: \"kubernetes.io/projected/abfa8ec3-0565-4927-a07c-9fed1240d270-kube-api-access-sb4z8\") pod \"kindnet-l75r4\" (UID: \"abfa8ec3-0565-4927-a07c-9fed1240d270\") " pod="kube-system/kindnet-l75r4"
	Nov 01 00:08:46 multinode-600483 kubelet[1263]: I1101 00:08:46.526728    1263 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-tq28b" podStartSLOduration=4.526640412 podCreationTimestamp="2023-11-01 00:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-01 00:08:44.521374222 +0000 UTC m=+14.329410278" watchObservedRunningTime="2023-11-01 00:08:46.526640412 +0000 UTC m=+16.334676468"
	Nov 01 00:08:47 multinode-600483 kubelet[1263]: I1101 00:08:47.408197    1263 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 01 00:08:47 multinode-600483 kubelet[1263]: I1101 00:08:47.444990    1263 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-l75r4" podStartSLOduration=5.444921106 podCreationTimestamp="2023-11-01 00:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-01 00:08:46.527843911 +0000 UTC m=+16.335879949" watchObservedRunningTime="2023-11-01 00:08:47.444921106 +0000 UTC m=+17.252957376"
	Nov 01 00:08:47 multinode-600483 kubelet[1263]: I1101 00:08:47.445245    1263 topology_manager.go:215] "Topology Admit Handler" podUID="d8ab0ebb-aa1f-4143-b987-6c1ae065954a" podNamespace="kube-system" podName="coredns-5dd5756b68-rpvvn"
	Nov 01 00:08:47 multinode-600483 kubelet[1263]: I1101 00:08:47.449430    1263 topology_manager.go:215] "Topology Admit Handler" podUID="a67f136b-7645-4eb9-9568-52e3ab06d66e" podNamespace="kube-system" podName="storage-provisioner"
	Nov 01 00:08:47 multinode-600483 kubelet[1263]: I1101 00:08:47.542036    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cgnx\" (UniqueName: \"kubernetes.io/projected/d8ab0ebb-aa1f-4143-b987-6c1ae065954a-kube-api-access-5cgnx\") pod \"coredns-5dd5756b68-rpvvn\" (UID: \"d8ab0ebb-aa1f-4143-b987-6c1ae065954a\") " pod="kube-system/coredns-5dd5756b68-rpvvn"
	Nov 01 00:08:47 multinode-600483 kubelet[1263]: I1101 00:08:47.542097    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a67f136b-7645-4eb9-9568-52e3ab06d66e-tmp\") pod \"storage-provisioner\" (UID: \"a67f136b-7645-4eb9-9568-52e3ab06d66e\") " pod="kube-system/storage-provisioner"
	Nov 01 00:08:47 multinode-600483 kubelet[1263]: I1101 00:08:47.542122    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8ab0ebb-aa1f-4143-b987-6c1ae065954a-config-volume\") pod \"coredns-5dd5756b68-rpvvn\" (UID: \"d8ab0ebb-aa1f-4143-b987-6c1ae065954a\") " pod="kube-system/coredns-5dd5756b68-rpvvn"
	Nov 01 00:08:47 multinode-600483 kubelet[1263]: I1101 00:08:47.542144    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdmcc\" (UniqueName: \"kubernetes.io/projected/a67f136b-7645-4eb9-9568-52e3ab06d66e-kube-api-access-hdmcc\") pod \"storage-provisioner\" (UID: \"a67f136b-7645-4eb9-9568-52e3ab06d66e\") " pod="kube-system/storage-provisioner"
	Nov 01 00:08:49 multinode-600483 kubelet[1263]: I1101 00:08:49.550042    1263 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-rpvvn" podStartSLOduration=7.550004492 podCreationTimestamp="2023-11-01 00:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-01 00:08:49.548294983 +0000 UTC m=+19.356331040" watchObservedRunningTime="2023-11-01 00:08:49.550004492 +0000 UTC m=+19.358040547"
	Nov 01 00:09:30 multinode-600483 kubelet[1263]: E1101 00:09:30.419445    1263 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 01 00:09:30 multinode-600483 kubelet[1263]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 01 00:09:30 multinode-600483 kubelet[1263]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 01 00:09:30 multinode-600483 kubelet[1263]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 01 00:09:36 multinode-600483 kubelet[1263]: I1101 00:09:36.204506    1263 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=53.204419198 podCreationTimestamp="2023-11-01 00:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-01 00:08:49.596616496 +0000 UTC m=+19.404652551" watchObservedRunningTime="2023-11-01 00:09:36.204419198 +0000 UTC m=+66.012455241"
	Nov 01 00:09:36 multinode-600483 kubelet[1263]: I1101 00:09:36.204849    1263 topology_manager.go:215] "Topology Admit Handler" podUID="85bd3938-9131-4eed-b6f7-7a4cd85f2cb9" podNamespace="default" podName="busybox-5bc68d56bd-8pjvd"
	Nov 01 00:09:36 multinode-600483 kubelet[1263]: I1101 00:09:36.228456    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ndz4\" (UniqueName: \"kubernetes.io/projected/85bd3938-9131-4eed-b6f7-7a4cd85f2cb9-kube-api-access-2ndz4\") pod \"busybox-5bc68d56bd-8pjvd\" (UID: \"85bd3938-9131-4eed-b6f7-7a4cd85f2cb9\") " pod="default/busybox-5bc68d56bd-8pjvd"
	Nov 01 00:09:40 multinode-600483 kubelet[1263]: I1101 00:09:40.716219    1263 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-8pjvd" podStartSLOduration=2.260465418 podCreationTimestamp="2023-11-01 00:09:36 +0000 UTC" firstStartedPulling="2023-11-01 00:09:37.140599419 +0000 UTC m=+66.948635455" lastFinishedPulling="2023-11-01 00:09:39.596313733 +0000 UTC m=+69.404349768" observedRunningTime="2023-11-01 00:09:40.716137636 +0000 UTC m=+70.524173689" watchObservedRunningTime="2023-11-01 00:09:40.716179731 +0000 UTC m=+70.524215786"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-600483 -n multinode-600483
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-600483 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.34s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (685.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-600483
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-600483
E1101 00:12:16.006859   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
E1101 00:13:02.507440   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-600483: exit status 82 (2m1.221236002s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-600483"  ...
	* Stopping node "multinode-600483"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:292: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-600483" : exit status 82
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-600483 --wait=true -v=8 --alsologtostderr
E1101 00:14:25.551118   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
E1101 00:15:35.092412   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
E1101 00:17:16.007132   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
E1101 00:18:02.507381   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
E1101 00:18:39.051485   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
E1101 00:20:35.092054   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
E1101 00:21:58.138489   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
E1101 00:22:16.006974   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-600483 --wait=true -v=8 --alsologtostderr: (9m21.465355261s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-600483
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-600483 -n multinode-600483
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-600483 logs -n 25: (1.6170561s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| ssh     | multinode-600483 ssh -n                                                                 | multinode-600483 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:10 UTC | 01 Nov 23 00:10 UTC |
	|         | multinode-600483-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-600483 cp multinode-600483-m02:/home/docker/cp-test.txt                       | multinode-600483 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:10 UTC | 01 Nov 23 00:10 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile219734398/001/cp-test_multinode-600483-m02.txt          |                  |         |                |                     |                     |
	| ssh     | multinode-600483 ssh -n                                                                 | multinode-600483 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:10 UTC | 01 Nov 23 00:10 UTC |
	|         | multinode-600483-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-600483 cp multinode-600483-m02:/home/docker/cp-test.txt                       | multinode-600483 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:10 UTC | 01 Nov 23 00:10 UTC |
	|         | multinode-600483:/home/docker/cp-test_multinode-600483-m02_multinode-600483.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-600483 ssh -n                                                                 | multinode-600483 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:10 UTC | 01 Nov 23 00:10 UTC |
	|         | multinode-600483-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-600483 ssh -n multinode-600483 sudo cat                                       | multinode-600483 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:10 UTC | 01 Nov 23 00:10 UTC |
	|         | /home/docker/cp-test_multinode-600483-m02_multinode-600483.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-600483 cp multinode-600483-m02:/home/docker/cp-test.txt                       | multinode-600483 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:10 UTC | 01 Nov 23 00:10 UTC |
	|         | multinode-600483-m03:/home/docker/cp-test_multinode-600483-m02_multinode-600483-m03.txt |                  |         |                |                     |                     |
	| ssh     | multinode-600483 ssh -n                                                                 | multinode-600483 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:10 UTC | 01 Nov 23 00:10 UTC |
	|         | multinode-600483-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-600483 ssh -n multinode-600483-m03 sudo cat                                   | multinode-600483 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:10 UTC | 01 Nov 23 00:10 UTC |
	|         | /home/docker/cp-test_multinode-600483-m02_multinode-600483-m03.txt                      |                  |         |                |                     |                     |
	| cp      | multinode-600483 cp testdata/cp-test.txt                                                | multinode-600483 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:10 UTC | 01 Nov 23 00:10 UTC |
	|         | multinode-600483-m03:/home/docker/cp-test.txt                                           |                  |         |                |                     |                     |
	| ssh     | multinode-600483 ssh -n                                                                 | multinode-600483 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:10 UTC | 01 Nov 23 00:10 UTC |
	|         | multinode-600483-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-600483 cp multinode-600483-m03:/home/docker/cp-test.txt                       | multinode-600483 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:10 UTC | 01 Nov 23 00:10 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile219734398/001/cp-test_multinode-600483-m03.txt          |                  |         |                |                     |                     |
	| ssh     | multinode-600483 ssh -n                                                                 | multinode-600483 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:10 UTC | 01 Nov 23 00:10 UTC |
	|         | multinode-600483-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-600483 cp multinode-600483-m03:/home/docker/cp-test.txt                       | multinode-600483 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:10 UTC | 01 Nov 23 00:10 UTC |
	|         | multinode-600483:/home/docker/cp-test_multinode-600483-m03_multinode-600483.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-600483 ssh -n                                                                 | multinode-600483 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:10 UTC | 01 Nov 23 00:10 UTC |
	|         | multinode-600483-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-600483 ssh -n multinode-600483 sudo cat                                       | multinode-600483 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:10 UTC | 01 Nov 23 00:10 UTC |
	|         | /home/docker/cp-test_multinode-600483-m03_multinode-600483.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-600483 cp multinode-600483-m03:/home/docker/cp-test.txt                       | multinode-600483 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:10 UTC | 01 Nov 23 00:10 UTC |
	|         | multinode-600483-m02:/home/docker/cp-test_multinode-600483-m03_multinode-600483-m02.txt |                  |         |                |                     |                     |
	| ssh     | multinode-600483 ssh -n                                                                 | multinode-600483 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:10 UTC | 01 Nov 23 00:10 UTC |
	|         | multinode-600483-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-600483 ssh -n multinode-600483-m02 sudo cat                                   | multinode-600483 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:10 UTC | 01 Nov 23 00:10 UTC |
	|         | /home/docker/cp-test_multinode-600483-m03_multinode-600483-m02.txt                      |                  |         |                |                     |                     |
	| node    | multinode-600483 node stop m03                                                          | multinode-600483 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:10 UTC | 01 Nov 23 00:10 UTC |
	| node    | multinode-600483 node start                                                             | multinode-600483 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:10 UTC | 01 Nov 23 00:11 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |                |                     |                     |
	| node    | list -p multinode-600483                                                                | multinode-600483 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:11 UTC |                     |
	| stop    | -p multinode-600483                                                                     | multinode-600483 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:11 UTC |                     |
	| start   | -p multinode-600483                                                                     | multinode-600483 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:13 UTC | 01 Nov 23 00:22 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |                |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |                |                     |                     |
	| node    | list -p multinode-600483                                                                | multinode-600483 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:22 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/01 00:13:12
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 00:13:12.980640   30437 out.go:296] Setting OutFile to fd 1 ...
	I1101 00:13:12.980783   30437 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:13:12.980791   30437 out.go:309] Setting ErrFile to fd 2...
	I1101 00:13:12.980795   30437 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:13:12.980996   30437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7305/.minikube/bin
	I1101 00:13:12.981546   30437 out.go:303] Setting JSON to false
	I1101 00:13:12.982437   30437 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3338,"bootTime":1698794255,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 00:13:12.982503   30437 start.go:138] virtualization: kvm guest
	I1101 00:13:12.985113   30437 out.go:177] * [multinode-600483] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1101 00:13:12.986924   30437 out.go:177]   - MINIKUBE_LOCATION=17486
	I1101 00:13:12.986899   30437 notify.go:220] Checking for updates...
	I1101 00:13:12.989828   30437 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 00:13:12.991575   30437 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 00:13:12.993372   30437 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7305/.minikube
	I1101 00:13:12.995219   30437 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 00:13:12.996843   30437 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 00:13:12.998992   30437 config.go:182] Loaded profile config "multinode-600483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:13:12.999100   30437 driver.go:378] Setting default libvirt URI to qemu:///system
	I1101 00:13:12.999527   30437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1101 00:13:12.999592   30437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:13:13.015026   30437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35335
	I1101 00:13:13.015408   30437 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:13:13.015957   30437 main.go:141] libmachine: Using API Version  1
	I1101 00:13:13.015984   30437 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:13:13.016313   30437 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:13:13.016554   30437 main.go:141] libmachine: (multinode-600483) Calling .DriverName
	I1101 00:13:13.053941   30437 out.go:177] * Using the kvm2 driver based on existing profile
	I1101 00:13:13.055363   30437 start.go:298] selected driver: kvm2
	I1101 00:13:13.055378   30437 start.go:902] validating driver "kvm2" against &{Name:multinode-600483 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.3 ClusterName:multinode-600483 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.130 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.109 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.2 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false
ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:13:13.055508   30437 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 00:13:13.055819   30437 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:13:13.055886   30437 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17486-7305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1101 00:13:13.070825   30437 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1101 00:13:13.071621   30437 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 00:13:13.071658   30437 cni.go:84] Creating CNI manager for ""
	I1101 00:13:13.071668   30437 cni.go:136] 3 nodes found, recommending kindnet
	I1101 00:13:13.071676   30437 start_flags.go:323] config:
	{Name:multinode-600483 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-600483 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.130 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.109 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.2 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-prov
isioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Socket
VMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:13:13.071893   30437 iso.go:125] acquiring lock: {Name:mk1f649ca0b7c1ae293cd66cb85f9eeda028b20b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:13:13.073931   30437 out.go:177] * Starting control plane node multinode-600483 in cluster multinode-600483
	I1101 00:13:13.075447   30437 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 00:13:13.075495   30437 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1101 00:13:13.075505   30437 cache.go:56] Caching tarball of preloaded images
	I1101 00:13:13.075604   30437 preload.go:174] Found /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 00:13:13.075617   30437 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1101 00:13:13.075764   30437 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/config.json ...
	I1101 00:13:13.076041   30437 start.go:365] acquiring machines lock for multinode-600483: {Name:mk7aad88408c319111b9be8e59d9593a9e88374b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 00:13:13.076091   30437 start.go:369] acquired machines lock for "multinode-600483" in 28.047µs
	I1101 00:13:13.076109   30437 start.go:96] Skipping create...Using existing machine configuration
	I1101 00:13:13.076119   30437 fix.go:54] fixHost starting: 
	I1101 00:13:13.076371   30437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1101 00:13:13.076409   30437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:13:13.090641   30437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38121
	I1101 00:13:13.091139   30437 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:13:13.091612   30437 main.go:141] libmachine: Using API Version  1
	I1101 00:13:13.091638   30437 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:13:13.092038   30437 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:13:13.092236   30437 main.go:141] libmachine: (multinode-600483) Calling .DriverName
	I1101 00:13:13.092401   30437 main.go:141] libmachine: (multinode-600483) Calling .GetState
	I1101 00:13:13.094323   30437 fix.go:102] recreateIfNeeded on multinode-600483: state=Running err=<nil>
	W1101 00:13:13.094348   30437 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 00:13:13.096699   30437 out.go:177] * Updating the running kvm2 "multinode-600483" VM ...
	I1101 00:13:13.098295   30437 machine.go:88] provisioning docker machine ...
	I1101 00:13:13.098320   30437 main.go:141] libmachine: (multinode-600483) Calling .DriverName
	I1101 00:13:13.098606   30437 main.go:141] libmachine: (multinode-600483) Calling .GetMachineName
	I1101 00:13:13.098766   30437 buildroot.go:166] provisioning hostname "multinode-600483"
	I1101 00:13:13.098784   30437 main.go:141] libmachine: (multinode-600483) Calling .GetMachineName
	I1101 00:13:13.098930   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHHostname
	I1101 00:13:13.101568   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:13:13.102027   30437 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:13:13.102065   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:13:13.102216   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHPort
	I1101 00:13:13.102421   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:13:13.102541   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:13:13.102676   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHUsername
	I1101 00:13:13.102799   30437 main.go:141] libmachine: Using SSH client type: native
	I1101 00:13:13.103126   30437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.130 22 <nil> <nil>}
	I1101 00:13:13.103141   30437 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-600483 && echo "multinode-600483" | sudo tee /etc/hostname
	I1101 00:13:31.564237   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:13:37.644255   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:13:40.716262   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:13:46.796293   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:13:49.868170   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:13:55.948250   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:13:59.020233   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:14:05.100199   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:14:08.172266   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:14:14.252264   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:14:17.324173   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:14:23.404211   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:14:26.476230   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:14:32.556235   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:14:35.628198   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:14:41.708300   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:14:44.780312   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:14:50.860274   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:14:53.932194   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:15:00.012267   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:15:03.084223   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:15:09.164248   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:15:12.236238   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:15:18.316178   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:15:21.388307   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:15:27.468207   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:15:30.540295   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:15:36.620214   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:15:39.692276   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:15:45.772246   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:15:48.844313   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:15:54.924200   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:15:57.996281   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:16:04.076213   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:16:07.148291   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:16:13.228189   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:16:16.300234   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:16:22.380273   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:16:25.452229   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:16:31.532246   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:16:34.604222   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:16:40.684220   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:16:43.756201   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:16:49.836270   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:16:52.908161   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:16:58.988244   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:17:02.060230   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:17:08.140132   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:17:11.212293   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:17:17.292228   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:17:20.364265   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:17:26.444256   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:17:29.516272   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:17:35.596191   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:17:38.668296   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:17:44.748225   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:17:47.820297   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:17:53.900237   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:17:56.972338   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:18:03.052312   30437 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.130:22: connect: no route to host
	I1101 00:18:06.054490   30437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 00:18:06.054536   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHHostname
	I1101 00:18:06.056512   30437 machine.go:91] provisioned docker machine in 4m52.958197637s
	I1101 00:18:06.056547   30437 fix.go:56] fixHost completed within 4m52.980429673s
	I1101 00:18:06.056553   30437 start.go:83] releasing machines lock for "multinode-600483", held for 4m52.980451293s
	W1101 00:18:06.056569   30437 start.go:691] error starting host: provision: host is not running
	W1101 00:18:06.056650   30437 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1101 00:18:06.056659   30437 start.go:706] Will try again in 5 seconds ...
	I1101 00:18:11.057008   30437 start.go:365] acquiring machines lock for multinode-600483: {Name:mk7aad88408c319111b9be8e59d9593a9e88374b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 00:18:11.057105   30437 start.go:369] acquired machines lock for "multinode-600483" in 60.747µs
	I1101 00:18:11.057126   30437 start.go:96] Skipping create...Using existing machine configuration
	I1101 00:18:11.057134   30437 fix.go:54] fixHost starting: 
	I1101 00:18:11.057391   30437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1101 00:18:11.057412   30437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:18:11.071565   30437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42197
	I1101 00:18:11.071993   30437 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:18:11.072451   30437 main.go:141] libmachine: Using API Version  1
	I1101 00:18:11.072484   30437 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:18:11.072818   30437 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:18:11.073001   30437 main.go:141] libmachine: (multinode-600483) Calling .DriverName
	I1101 00:18:11.073184   30437 main.go:141] libmachine: (multinode-600483) Calling .GetState
	I1101 00:18:11.074967   30437 fix.go:102] recreateIfNeeded on multinode-600483: state=Stopped err=<nil>
	I1101 00:18:11.074989   30437 main.go:141] libmachine: (multinode-600483) Calling .DriverName
	W1101 00:18:11.075135   30437 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 00:18:11.077105   30437 out.go:177] * Restarting existing kvm2 VM for "multinode-600483" ...
	I1101 00:18:11.078509   30437 main.go:141] libmachine: (multinode-600483) Calling .Start
	I1101 00:18:11.078694   30437 main.go:141] libmachine: (multinode-600483) Ensuring networks are active...
	I1101 00:18:11.079546   30437 main.go:141] libmachine: (multinode-600483) Ensuring network default is active
	I1101 00:18:11.079911   30437 main.go:141] libmachine: (multinode-600483) Ensuring network mk-multinode-600483 is active
	I1101 00:18:11.080258   30437 main.go:141] libmachine: (multinode-600483) Getting domain xml...
	I1101 00:18:11.080907   30437 main.go:141] libmachine: (multinode-600483) Creating domain...
	I1101 00:18:12.317031   30437 main.go:141] libmachine: (multinode-600483) Waiting to get IP...
	I1101 00:18:12.317790   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:12.318142   30437 main.go:141] libmachine: (multinode-600483) DBG | unable to find current IP address of domain multinode-600483 in network mk-multinode-600483
	I1101 00:18:12.318231   30437 main.go:141] libmachine: (multinode-600483) DBG | I1101 00:18:12.318143   31676 retry.go:31] will retry after 251.138218ms: waiting for machine to come up
	I1101 00:18:12.570717   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:12.571206   30437 main.go:141] libmachine: (multinode-600483) DBG | unable to find current IP address of domain multinode-600483 in network mk-multinode-600483
	I1101 00:18:12.571234   30437 main.go:141] libmachine: (multinode-600483) DBG | I1101 00:18:12.571166   31676 retry.go:31] will retry after 312.948957ms: waiting for machine to come up
	I1101 00:18:12.885616   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:12.886029   30437 main.go:141] libmachine: (multinode-600483) DBG | unable to find current IP address of domain multinode-600483 in network mk-multinode-600483
	I1101 00:18:12.886075   30437 main.go:141] libmachine: (multinode-600483) DBG | I1101 00:18:12.886003   31676 retry.go:31] will retry after 371.751437ms: waiting for machine to come up
	I1101 00:18:13.259653   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:13.260018   30437 main.go:141] libmachine: (multinode-600483) DBG | unable to find current IP address of domain multinode-600483 in network mk-multinode-600483
	I1101 00:18:13.260043   30437 main.go:141] libmachine: (multinode-600483) DBG | I1101 00:18:13.259995   31676 retry.go:31] will retry after 396.558848ms: waiting for machine to come up
	I1101 00:18:13.658578   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:13.659008   30437 main.go:141] libmachine: (multinode-600483) DBG | unable to find current IP address of domain multinode-600483 in network mk-multinode-600483
	I1101 00:18:13.659033   30437 main.go:141] libmachine: (multinode-600483) DBG | I1101 00:18:13.658964   31676 retry.go:31] will retry after 719.463166ms: waiting for machine to come up
	I1101 00:18:14.379810   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:14.380229   30437 main.go:141] libmachine: (multinode-600483) DBG | unable to find current IP address of domain multinode-600483 in network mk-multinode-600483
	I1101 00:18:14.380261   30437 main.go:141] libmachine: (multinode-600483) DBG | I1101 00:18:14.380177   31676 retry.go:31] will retry after 814.138788ms: waiting for machine to come up
	I1101 00:18:15.195966   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:15.196628   30437 main.go:141] libmachine: (multinode-600483) DBG | unable to find current IP address of domain multinode-600483 in network mk-multinode-600483
	I1101 00:18:15.196654   30437 main.go:141] libmachine: (multinode-600483) DBG | I1101 00:18:15.196564   31676 retry.go:31] will retry after 1.171801986s: waiting for machine to come up
	I1101 00:18:16.369743   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:16.370101   30437 main.go:141] libmachine: (multinode-600483) DBG | unable to find current IP address of domain multinode-600483 in network mk-multinode-600483
	I1101 00:18:16.370138   30437 main.go:141] libmachine: (multinode-600483) DBG | I1101 00:18:16.370085   31676 retry.go:31] will retry after 1.355629663s: waiting for machine to come up
	I1101 00:18:17.727575   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:17.728008   30437 main.go:141] libmachine: (multinode-600483) DBG | unable to find current IP address of domain multinode-600483 in network mk-multinode-600483
	I1101 00:18:17.728034   30437 main.go:141] libmachine: (multinode-600483) DBG | I1101 00:18:17.727982   31676 retry.go:31] will retry after 1.400054948s: waiting for machine to come up
	I1101 00:18:19.130439   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:19.130842   30437 main.go:141] libmachine: (multinode-600483) DBG | unable to find current IP address of domain multinode-600483 in network mk-multinode-600483
	I1101 00:18:19.130866   30437 main.go:141] libmachine: (multinode-600483) DBG | I1101 00:18:19.130797   31676 retry.go:31] will retry after 2.231020261s: waiting for machine to come up
	I1101 00:18:21.363487   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:21.363899   30437 main.go:141] libmachine: (multinode-600483) DBG | unable to find current IP address of domain multinode-600483 in network mk-multinode-600483
	I1101 00:18:21.363942   30437 main.go:141] libmachine: (multinode-600483) DBG | I1101 00:18:21.363860   31676 retry.go:31] will retry after 2.790694317s: waiting for machine to come up
	I1101 00:18:24.157295   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:24.157718   30437 main.go:141] libmachine: (multinode-600483) DBG | unable to find current IP address of domain multinode-600483 in network mk-multinode-600483
	I1101 00:18:24.157744   30437 main.go:141] libmachine: (multinode-600483) DBG | I1101 00:18:24.157685   31676 retry.go:31] will retry after 3.333819589s: waiting for machine to come up
	I1101 00:18:27.493162   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:27.493663   30437 main.go:141] libmachine: (multinode-600483) DBG | unable to find current IP address of domain multinode-600483 in network mk-multinode-600483
	I1101 00:18:27.493682   30437 main.go:141] libmachine: (multinode-600483) DBG | I1101 00:18:27.493619   31676 retry.go:31] will retry after 3.182270949s: waiting for machine to come up
	I1101 00:18:30.680090   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:30.680629   30437 main.go:141] libmachine: (multinode-600483) Found IP for machine: 192.168.39.130
	I1101 00:18:30.680659   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has current primary IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:30.680669   30437 main.go:141] libmachine: (multinode-600483) Reserving static IP address...
	I1101 00:18:30.681129   30437 main.go:141] libmachine: (multinode-600483) Reserved static IP address: 192.168.39.130
	I1101 00:18:30.681163   30437 main.go:141] libmachine: (multinode-600483) Waiting for SSH to be available...
	I1101 00:18:30.681188   30437 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "multinode-600483", mac: "52:54:00:80:59:53", ip: "192.168.39.130"} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:18:30.681234   30437 main.go:141] libmachine: (multinode-600483) DBG | skip adding static IP to network mk-multinode-600483 - found existing host DHCP lease matching {name: "multinode-600483", mac: "52:54:00:80:59:53", ip: "192.168.39.130"}
	I1101 00:18:30.681253   30437 main.go:141] libmachine: (multinode-600483) DBG | Getting to WaitForSSH function...
	I1101 00:18:30.683289   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:30.683568   30437 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:18:30.683597   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:30.683710   30437 main.go:141] libmachine: (multinode-600483) DBG | Using SSH client type: external
	I1101 00:18:30.683741   30437 main.go:141] libmachine: (multinode-600483) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483/id_rsa (-rw-------)
	I1101 00:18:30.683793   30437 main.go:141] libmachine: (multinode-600483) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.130 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 00:18:30.683811   30437 main.go:141] libmachine: (multinode-600483) DBG | About to run SSH command:
	I1101 00:18:30.683826   30437 main.go:141] libmachine: (multinode-600483) DBG | exit 0
	I1101 00:18:30.772074   30437 main.go:141] libmachine: (multinode-600483) DBG | SSH cmd err, output: <nil>: 
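The "Using SSH client type: external" lines show the exact option string handed to /usr/bin/ssh and the `exit 0` probe used to decide SSH is available. Reassembled into a single command (options and key path copied from the log, shown purely for illustration):

    ssh -F /dev/null \
        -o ConnectionAttempts=3 -o ConnectTimeout=10 \
        -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
        -o PasswordAuthentication=no -o ServerAliveInterval=60 \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483/id_rsa \
        -p 22 docker@192.168.39.130 'exit 0' && echo "SSH is up"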
	I1101 00:18:30.772397   30437 main.go:141] libmachine: (multinode-600483) Calling .GetConfigRaw
	I1101 00:18:30.773157   30437 main.go:141] libmachine: (multinode-600483) Calling .GetIP
	I1101 00:18:30.776124   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:30.776541   30437 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:18:30.776577   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:30.776789   30437 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/config.json ...
	I1101 00:18:30.777042   30437 machine.go:88] provisioning docker machine ...
	I1101 00:18:30.777066   30437 main.go:141] libmachine: (multinode-600483) Calling .DriverName
	I1101 00:18:30.777300   30437 main.go:141] libmachine: (multinode-600483) Calling .GetMachineName
	I1101 00:18:30.777491   30437 buildroot.go:166] provisioning hostname "multinode-600483"
	I1101 00:18:30.777509   30437 main.go:141] libmachine: (multinode-600483) Calling .GetMachineName
	I1101 00:18:30.777692   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHHostname
	I1101 00:18:30.780105   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:30.780432   30437 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:18:30.780481   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:30.780549   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHPort
	I1101 00:18:30.780714   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:18:30.780860   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:18:30.781010   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHUsername
	I1101 00:18:30.781169   30437 main.go:141] libmachine: Using SSH client type: native
	I1101 00:18:30.781619   30437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.130 22 <nil> <nil>}
	I1101 00:18:30.781641   30437 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-600483 && echo "multinode-600483" | sudo tee /etc/hostname
	I1101 00:18:30.913262   30437 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-600483
	
	I1101 00:18:30.913286   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHHostname
	I1101 00:18:30.916341   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:30.916672   30437 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:18:30.916705   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:30.916870   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHPort
	I1101 00:18:30.917081   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:18:30.917246   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:18:30.917389   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHUsername
	I1101 00:18:30.917563   30437 main.go:141] libmachine: Using SSH client type: native
	I1101 00:18:30.917867   30437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.130 22 <nil> <nil>}
	I1101 00:18:30.917905   30437 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-600483' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-600483/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-600483' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 00:18:31.044387   30437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
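The two SSH commands above set the VM hostname and make sure /etc/hosts maps 127.0.1.1 to it. Gathered into one script so the conditional is easier to read (same logic as logged; the NAME variable is introduced here only for readability):

    #!/bin/bash
    # consolidation of the provisioning commands shown in the log
    NAME=multinode-600483
    sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname

    if ! grep -xq ".*\s$NAME" /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        # rewrite an existing 127.0.1.1 entry
        sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NAME/g" /etc/hosts
      else
        # or append a fresh one
        echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
      fi
    fi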
	I1101 00:18:31.044419   30437 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1101 00:18:31.044447   30437 buildroot.go:174] setting up certificates
	I1101 00:18:31.044458   30437 provision.go:83] configureAuth start
	I1101 00:18:31.044472   30437 main.go:141] libmachine: (multinode-600483) Calling .GetMachineName
	I1101 00:18:31.044773   30437 main.go:141] libmachine: (multinode-600483) Calling .GetIP
	I1101 00:18:31.047887   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:31.048372   30437 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:18:31.048403   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:31.048604   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHHostname
	I1101 00:18:31.051027   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:31.051374   30437 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:18:31.051407   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:31.051543   30437 provision.go:138] copyHostCerts
	I1101 00:18:31.051572   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 00:18:31.051609   30437 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem, removing ...
	I1101 00:18:31.051621   30437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 00:18:31.051684   30437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1101 00:18:31.051773   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 00:18:31.051803   30437 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem, removing ...
	I1101 00:18:31.051809   30437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 00:18:31.051845   30437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1101 00:18:31.051919   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 00:18:31.051967   30437 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem, removing ...
	I1101 00:18:31.051977   30437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 00:18:31.052014   30437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1101 00:18:31.052099   30437 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.multinode-600483 san=[192.168.39.130 192.168.39.130 localhost 127.0.0.1 minikube multinode-600483]
	I1101 00:18:31.256010   30437 provision.go:172] copyRemoteCerts
	I1101 00:18:31.256082   30437 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 00:18:31.256113   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHHostname
	I1101 00:18:31.259034   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:31.259389   30437 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:18:31.259422   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:31.259598   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHPort
	I1101 00:18:31.259811   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:18:31.260011   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHUsername
	I1101 00:18:31.260165   30437 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483/id_rsa Username:docker}
	I1101 00:18:31.348629   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 00:18:31.348695   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 00:18:31.372478   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 00:18:31.372554   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1101 00:18:31.396766   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 00:18:31.396835   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 00:18:31.421059   30437 provision.go:86] duration metric: configureAuth took 376.573495ms
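configureAuth regenerates the machine's server certificate and copyRemoteCerts then pushes ca.pem, server.pem and server-key.pem into /etc/docker on the VM. An approximate manual equivalent with plain scp/ssh, using the paths from the log (a sketch, not minikube's actual transfer code, which streams the files over its own SSH session):

    M=/home/jenkins/minikube-integration/17486-7305/.minikube
    SSH_KEY=$M/machines/multinode-600483/id_rsa

    ssh -i "$SSH_KEY" docker@192.168.39.130 'sudo mkdir -p /etc/docker'
    # scp cannot write root-owned paths directly, so stage in /tmp and move
    for f in certs/ca.pem machines/server.pem machines/server-key.pem; do
      scp -i "$SSH_KEY" "$M/$f" docker@192.168.39.130:/tmp/
    done
    ssh -i "$SSH_KEY" docker@192.168.39.130 \
      'sudo mv /tmp/ca.pem /tmp/server.pem /tmp/server-key.pem /etc/docker/'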
	I1101 00:18:31.421100   30437 buildroot.go:189] setting minikube options for container-runtime
	I1101 00:18:31.421303   30437 config.go:182] Loaded profile config "multinode-600483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:18:31.421381   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHHostname
	I1101 00:18:31.424452   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:31.424975   30437 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:18:31.425006   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:31.425354   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHPort
	I1101 00:18:31.425627   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:18:31.425832   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:18:31.426051   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHUsername
	I1101 00:18:31.426265   30437 main.go:141] libmachine: Using SSH client type: native
	I1101 00:18:31.426647   30437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.130 22 <nil> <nil>}
	I1101 00:18:31.426667   30437 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 00:18:31.744849   30437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 00:18:31.744899   30437 machine.go:91] provisioned docker machine in 967.840462ms
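The last provisioning step above writes /etc/sysconfig/crio.minikube and restarts CRI-O. The "%!s(MISSING)" in the logged command is minikube masking a printf format verb; judging from the output echoed back just above, the command as actually run would look roughly like this (reconstruction, not a verbatim quote):

    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio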
	I1101 00:18:31.744912   30437 start.go:300] post-start starting for "multinode-600483" (driver="kvm2")
	I1101 00:18:31.744924   30437 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 00:18:31.744945   30437 main.go:141] libmachine: (multinode-600483) Calling .DriverName
	I1101 00:18:31.745303   30437 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 00:18:31.745329   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHHostname
	I1101 00:18:31.747881   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:31.748257   30437 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:18:31.748293   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:31.748400   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHPort
	I1101 00:18:31.748610   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:18:31.748770   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHUsername
	I1101 00:18:31.748905   30437 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483/id_rsa Username:docker}
	I1101 00:18:31.838505   30437 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 00:18:31.842933   30437 command_runner.go:130] > NAME=Buildroot
	I1101 00:18:31.842963   30437 command_runner.go:130] > VERSION=2021.02.12-1-g0cee705-dirty
	I1101 00:18:31.842968   30437 command_runner.go:130] > ID=buildroot
	I1101 00:18:31.842976   30437 command_runner.go:130] > VERSION_ID=2021.02.12
	I1101 00:18:31.842986   30437 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1101 00:18:31.843199   30437 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 00:18:31.843224   30437 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1101 00:18:31.843313   30437 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1101 00:18:31.843413   30437 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> 145042.pem in /etc/ssl/certs
	I1101 00:18:31.843426   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> /etc/ssl/certs/145042.pem
	I1101 00:18:31.843511   30437 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 00:18:31.853745   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /etc/ssl/certs/145042.pem (1708 bytes)
	I1101 00:18:31.875642   30437 start.go:303] post-start completed in 130.713468ms
	I1101 00:18:31.875670   30437 fix.go:56] fixHost completed within 20.818535341s
	I1101 00:18:31.875695   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHHostname
	I1101 00:18:31.878316   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:31.878882   30437 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:18:31.878917   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:31.879112   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHPort
	I1101 00:18:31.879319   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:18:31.879482   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:18:31.879646   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHUsername
	I1101 00:18:31.879803   30437 main.go:141] libmachine: Using SSH client type: native
	I1101 00:18:31.880218   30437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.130 22 <nil> <nil>}
	I1101 00:18:31.880233   30437 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1101 00:18:31.996880   30437 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698797911.949381796
	
	I1101 00:18:31.996905   30437 fix.go:206] guest clock: 1698797911.949381796
	I1101 00:18:31.996915   30437 fix.go:219] Guest: 2023-11-01 00:18:31.949381796 +0000 UTC Remote: 2023-11-01 00:18:31.875674091 +0000 UTC m=+318.947040819 (delta=73.707705ms)
	I1101 00:18:31.996940   30437 fix.go:190] guest clock delta is within tolerance: 73.707705ms
	I1101 00:18:31.996946   30437 start.go:83] releasing machines lock for "multinode-600483", held for 20.939830975s
	I1101 00:18:31.996980   30437 main.go:141] libmachine: (multinode-600483) Calling .DriverName
	I1101 00:18:31.997253   30437 main.go:141] libmachine: (multinode-600483) Calling .GetIP
	I1101 00:18:32.000091   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:32.000526   30437 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:18:32.000568   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:32.000744   30437 main.go:141] libmachine: (multinode-600483) Calling .DriverName
	I1101 00:18:32.001310   30437 main.go:141] libmachine: (multinode-600483) Calling .DriverName
	I1101 00:18:32.001555   30437 main.go:141] libmachine: (multinode-600483) Calling .DriverName
	I1101 00:18:32.001643   30437 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 00:18:32.001695   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHHostname
	I1101 00:18:32.001753   30437 ssh_runner.go:195] Run: cat /version.json
	I1101 00:18:32.001787   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHHostname
	I1101 00:18:32.004293   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:32.004624   30437 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:18:32.004663   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:32.004690   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:32.004778   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHPort
	I1101 00:18:32.004969   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:18:32.005046   30437 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:18:32.005074   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:32.005133   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHUsername
	I1101 00:18:32.005232   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHPort
	I1101 00:18:32.005304   30437 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483/id_rsa Username:docker}
	I1101 00:18:32.005358   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:18:32.005508   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHUsername
	I1101 00:18:32.005634   30437 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483/id_rsa Username:docker}
	I1101 00:18:32.126864   30437 command_runner.go:130] > {"iso_version": "v1.32.0-1698773592-17486", "kicbase_version": "v0.0.41-1698660445-17527", "minikube_version": "v1.32.0-beta.0", "commit": "01e1cff766666ed9b9dd97c2a32d71cdb94ff3cf"}
	I1101 00:18:32.126967   30437 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1101 00:18:32.127007   30437 ssh_runner.go:195] Run: systemctl --version
	I1101 00:18:32.132520   30437 command_runner.go:130] > systemd 247 (247)
	I1101 00:18:32.132558   30437 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1101 00:18:32.132758   30437 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 00:18:32.272554   30437 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1101 00:18:32.278527   30437 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1101 00:18:32.278832   30437 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 00:18:32.278904   30437 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 00:18:32.293601   30437 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1101 00:18:32.293681   30437 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 00:18:32.293690   30437 start.go:472] detecting cgroup driver to use...
	I1101 00:18:32.293765   30437 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 00:18:32.307448   30437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 00:18:32.319897   30437 docker.go:204] disabling cri-docker service (if available) ...
	I1101 00:18:32.319989   30437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 00:18:32.332446   30437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 00:18:32.345216   30437 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 00:18:32.449424   30437 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1101 00:18:32.449506   30437 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 00:18:32.463208   30437 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1101 00:18:32.570502   30437 docker.go:220] disabling docker service ...
	I1101 00:18:32.570568   30437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 00:18:32.583239   30437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 00:18:32.594020   30437 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1101 00:18:32.594798   30437 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 00:18:32.609250   30437 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1101 00:18:32.709738   30437 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 00:18:32.824963   30437 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1101 00:18:32.824990   30437 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1101 00:18:32.825047   30437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 00:18:32.837692   30437 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 00:18:32.854671   30437 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1101 00:18:32.854721   30437 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 00:18:32.854772   30437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:18:32.864009   30437 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 00:18:32.864078   30437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:18:32.872844   30437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:18:32.881949   30437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:18:32.891162   30437 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 00:18:32.901001   30437 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 00:18:32.909402   30437 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 00:18:32.909530   30437 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 00:18:32.909586   30437 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 00:18:32.923218   30437 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 00:18:32.932006   30437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:18:33.048684   30437 ssh_runner.go:195] Run: sudo systemctl restart crio
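Between "detecting cgroup driver" and this restart, the log shows CRI-O being pointed at registry.k8s.io/pause:3.9, switched to the cgroupfs cgroup manager, and the netfilter prerequisites being loaded. The same edits, collected into one script for readability (commands as logged):

    # configure CRI-O as in the steps above
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo rm -rf /etc/cni/net.mk

    # netfilter prerequisites (the sysctl probe fails until br_netfilter is loaded)
    sudo modprobe br_netfilter
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"

    sudo systemctl daemon-reload
    sudo systemctl restart crio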
	I1101 00:18:33.206602   30437 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 00:18:33.206680   30437 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 00:18:33.211411   30437 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1101 00:18:33.211446   30437 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1101 00:18:33.211460   30437 command_runner.go:130] > Device: 16h/22d	Inode: 776         Links: 1
	I1101 00:18:33.211473   30437 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1101 00:18:33.211481   30437 command_runner.go:130] > Access: 2023-11-01 00:18:33.146951890 +0000
	I1101 00:18:33.211494   30437 command_runner.go:130] > Modify: 2023-11-01 00:18:33.146951890 +0000
	I1101 00:18:33.211503   30437 command_runner.go:130] > Change: 2023-11-01 00:18:33.146951890 +0000
	I1101 00:18:33.211509   30437 command_runner.go:130] >  Birth: -
	I1101 00:18:33.211534   30437 start.go:540] Will wait 60s for crictl version
	I1101 00:18:33.211580   30437 ssh_runner.go:195] Run: which crictl
	I1101 00:18:33.215051   30437 command_runner.go:130] > /usr/bin/crictl
	I1101 00:18:33.215149   30437 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 00:18:33.255962   30437 command_runner.go:130] > Version:  0.1.0
	I1101 00:18:33.255989   30437 command_runner.go:130] > RuntimeName:  cri-o
	I1101 00:18:33.255996   30437 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1101 00:18:33.256004   30437 command_runner.go:130] > RuntimeApiVersion:  v1
	I1101 00:18:33.257356   30437 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1101 00:18:33.257424   30437 ssh_runner.go:195] Run: crio --version
	I1101 00:18:33.303252   30437 command_runner.go:130] > crio version 1.24.1
	I1101 00:18:33.303279   30437 command_runner.go:130] > Version:          1.24.1
	I1101 00:18:33.303291   30437 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1101 00:18:33.303299   30437 command_runner.go:130] > GitTreeState:     dirty
	I1101 00:18:33.303309   30437 command_runner.go:130] > BuildDate:        2023-10-31T22:57:11Z
	I1101 00:18:33.303317   30437 command_runner.go:130] > GoVersion:        go1.19.9
	I1101 00:18:33.303325   30437 command_runner.go:130] > Compiler:         gc
	I1101 00:18:33.303333   30437 command_runner.go:130] > Platform:         linux/amd64
	I1101 00:18:33.303341   30437 command_runner.go:130] > Linkmode:         dynamic
	I1101 00:18:33.303354   30437 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1101 00:18:33.303362   30437 command_runner.go:130] > SeccompEnabled:   true
	I1101 00:18:33.303373   30437 command_runner.go:130] > AppArmorEnabled:  false
	I1101 00:18:33.304730   30437 ssh_runner.go:195] Run: crio --version
	I1101 00:18:33.345494   30437 command_runner.go:130] > crio version 1.24.1
	I1101 00:18:33.345521   30437 command_runner.go:130] > Version:          1.24.1
	I1101 00:18:33.345531   30437 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1101 00:18:33.345537   30437 command_runner.go:130] > GitTreeState:     dirty
	I1101 00:18:33.345550   30437 command_runner.go:130] > BuildDate:        2023-10-31T22:57:11Z
	I1101 00:18:33.345558   30437 command_runner.go:130] > GoVersion:        go1.19.9
	I1101 00:18:33.345565   30437 command_runner.go:130] > Compiler:         gc
	I1101 00:18:33.345572   30437 command_runner.go:130] > Platform:         linux/amd64
	I1101 00:18:33.345579   30437 command_runner.go:130] > Linkmode:         dynamic
	I1101 00:18:33.345590   30437 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1101 00:18:33.345600   30437 command_runner.go:130] > SeccompEnabled:   true
	I1101 00:18:33.345604   30437 command_runner.go:130] > AppArmorEnabled:  false
	I1101 00:18:33.348709   30437 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1101 00:18:33.350313   30437 main.go:141] libmachine: (multinode-600483) Calling .GetIP
	I1101 00:18:33.352756   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:33.353153   30437 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:18:33.353196   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:18:33.353376   30437 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1101 00:18:33.357733   30437 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 00:18:33.370292   30437 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 00:18:33.370343   30437 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 00:18:33.406374   30437 command_runner.go:130] > {
	I1101 00:18:33.406397   30437 command_runner.go:130] >   "images": [
	I1101 00:18:33.406402   30437 command_runner.go:130] >     {
	I1101 00:18:33.406409   30437 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1101 00:18:33.406416   30437 command_runner.go:130] >       "repoTags": [
	I1101 00:18:33.406431   30437 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1101 00:18:33.406437   30437 command_runner.go:130] >       ],
	I1101 00:18:33.406445   30437 command_runner.go:130] >       "repoDigests": [
	I1101 00:18:33.406482   30437 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1101 00:18:33.406493   30437 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1101 00:18:33.406496   30437 command_runner.go:130] >       ],
	I1101 00:18:33.406503   30437 command_runner.go:130] >       "size": "750414",
	I1101 00:18:33.406511   30437 command_runner.go:130] >       "uid": {
	I1101 00:18:33.406519   30437 command_runner.go:130] >         "value": "65535"
	I1101 00:18:33.406530   30437 command_runner.go:130] >       },
	I1101 00:18:33.406540   30437 command_runner.go:130] >       "username": "",
	I1101 00:18:33.406558   30437 command_runner.go:130] >       "spec": null,
	I1101 00:18:33.406567   30437 command_runner.go:130] >       "pinned": false
	I1101 00:18:33.406574   30437 command_runner.go:130] >     }
	I1101 00:18:33.406578   30437 command_runner.go:130] >   ]
	I1101 00:18:33.406584   30437 command_runner.go:130] > }
	I1101 00:18:33.406709   30437 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
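The `crictl images --output json` dump above lists only the pause image, which is why the v1.28.3 images are judged not preloaded. Assuming jq is available, the same check is easier to read as:

    sudo crictl images --output json | jq -r '.images[].repoTags[]'
    # expected after a successful preload: registry.k8s.io/kube-apiserver:v1.28.3,
    # kube-controller-manager, kube-scheduler, kube-proxy, etcd, coredns, pause, ...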
	I1101 00:18:33.406773   30437 ssh_runner.go:195] Run: which lz4
	I1101 00:18:33.410244   30437 command_runner.go:130] > /usr/bin/lz4
	I1101 00:18:33.410336   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1101 00:18:33.410419   30437 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1101 00:18:33.414083   30437 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 00:18:33.414335   30437 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 00:18:33.414360   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1101 00:18:35.112353   30437 crio.go:444] Took 1.701966 seconds to copy over tarball
	I1101 00:18:35.112451   30437 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 00:18:38.036782   30437 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.924294864s)
	I1101 00:18:38.036810   30437 crio.go:451] Took 2.924434 seconds to extract the tarball
	I1101 00:18:38.036819   30437 ssh_runner.go:146] rm: /preloaded.tar.lz4
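With the images missing, minikube copies the preload tarball over SSH and unpacks it into /var with lz4, then removes it, as the three steps above show. A manual equivalent using the paths from the log (illustrative; relies on the lz4 binary already present at /usr/bin/lz4 on the guest):

    PRELOAD=/home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
    SSH_KEY=/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483/id_rsa

    scp -i "$SSH_KEY" "$PRELOAD" docker@192.168.39.130:preloaded.tar.lz4
    ssh -i "$SSH_KEY" docker@192.168.39.130 \
      'sudo tar -I lz4 -C /var -xf ~/preloaded.tar.lz4 && rm ~/preloaded.tar.lz4'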
	I1101 00:18:38.076832   30437 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 00:18:38.124233   30437 command_runner.go:130] > {
	I1101 00:18:38.124256   30437 command_runner.go:130] >   "images": [
	I1101 00:18:38.124263   30437 command_runner.go:130] >     {
	I1101 00:18:38.124275   30437 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1101 00:18:38.124283   30437 command_runner.go:130] >       "repoTags": [
	I1101 00:18:38.124293   30437 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1101 00:18:38.124300   30437 command_runner.go:130] >       ],
	I1101 00:18:38.124307   30437 command_runner.go:130] >       "repoDigests": [
	I1101 00:18:38.124326   30437 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1101 00:18:38.124341   30437 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1101 00:18:38.124348   30437 command_runner.go:130] >       ],
	I1101 00:18:38.124359   30437 command_runner.go:130] >       "size": "65258016",
	I1101 00:18:38.124365   30437 command_runner.go:130] >       "uid": null,
	I1101 00:18:38.124375   30437 command_runner.go:130] >       "username": "",
	I1101 00:18:38.124388   30437 command_runner.go:130] >       "spec": null,
	I1101 00:18:38.124397   30437 command_runner.go:130] >       "pinned": false
	I1101 00:18:38.124427   30437 command_runner.go:130] >     },
	I1101 00:18:38.124441   30437 command_runner.go:130] >     {
	I1101 00:18:38.124454   30437 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1101 00:18:38.124461   30437 command_runner.go:130] >       "repoTags": [
	I1101 00:18:38.124476   30437 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1101 00:18:38.124486   30437 command_runner.go:130] >       ],
	I1101 00:18:38.124495   30437 command_runner.go:130] >       "repoDigests": [
	I1101 00:18:38.124507   30437 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1101 00:18:38.124522   30437 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1101 00:18:38.124531   30437 command_runner.go:130] >       ],
	I1101 00:18:38.124542   30437 command_runner.go:130] >       "size": "31470524",
	I1101 00:18:38.124551   30437 command_runner.go:130] >       "uid": null,
	I1101 00:18:38.124558   30437 command_runner.go:130] >       "username": "",
	I1101 00:18:38.124568   30437 command_runner.go:130] >       "spec": null,
	I1101 00:18:38.124576   30437 command_runner.go:130] >       "pinned": false
	I1101 00:18:38.124585   30437 command_runner.go:130] >     },
	I1101 00:18:38.124591   30437 command_runner.go:130] >     {
	I1101 00:18:38.124601   30437 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1101 00:18:38.124608   30437 command_runner.go:130] >       "repoTags": [
	I1101 00:18:38.124615   30437 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1101 00:18:38.124619   30437 command_runner.go:130] >       ],
	I1101 00:18:38.124626   30437 command_runner.go:130] >       "repoDigests": [
	I1101 00:18:38.124634   30437 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1101 00:18:38.124644   30437 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1101 00:18:38.124650   30437 command_runner.go:130] >       ],
	I1101 00:18:38.124655   30437 command_runner.go:130] >       "size": "53621675",
	I1101 00:18:38.124660   30437 command_runner.go:130] >       "uid": null,
	I1101 00:18:38.124664   30437 command_runner.go:130] >       "username": "",
	I1101 00:18:38.124671   30437 command_runner.go:130] >       "spec": null,
	I1101 00:18:38.124675   30437 command_runner.go:130] >       "pinned": false
	I1101 00:18:38.124679   30437 command_runner.go:130] >     },
	I1101 00:18:38.124682   30437 command_runner.go:130] >     {
	I1101 00:18:38.124689   30437 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1101 00:18:38.124695   30437 command_runner.go:130] >       "repoTags": [
	I1101 00:18:38.124701   30437 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1101 00:18:38.124709   30437 command_runner.go:130] >       ],
	I1101 00:18:38.124714   30437 command_runner.go:130] >       "repoDigests": [
	I1101 00:18:38.124724   30437 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1101 00:18:38.124731   30437 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1101 00:18:38.124742   30437 command_runner.go:130] >       ],
	I1101 00:18:38.124747   30437 command_runner.go:130] >       "size": "295456551",
	I1101 00:18:38.124753   30437 command_runner.go:130] >       "uid": {
	I1101 00:18:38.124758   30437 command_runner.go:130] >         "value": "0"
	I1101 00:18:38.124764   30437 command_runner.go:130] >       },
	I1101 00:18:38.124767   30437 command_runner.go:130] >       "username": "",
	I1101 00:18:38.124772   30437 command_runner.go:130] >       "spec": null,
	I1101 00:18:38.124779   30437 command_runner.go:130] >       "pinned": false
	I1101 00:18:38.124782   30437 command_runner.go:130] >     },
	I1101 00:18:38.124786   30437 command_runner.go:130] >     {
	I1101 00:18:38.124792   30437 command_runner.go:130] >       "id": "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076",
	I1101 00:18:38.124797   30437 command_runner.go:130] >       "repoTags": [
	I1101 00:18:38.124802   30437 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.3"
	I1101 00:18:38.124808   30437 command_runner.go:130] >       ],
	I1101 00:18:38.124812   30437 command_runner.go:130] >       "repoDigests": [
	I1101 00:18:38.124822   30437 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab",
	I1101 00:18:38.124829   30437 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"
	I1101 00:18:38.124835   30437 command_runner.go:130] >       ],
	I1101 00:18:38.124839   30437 command_runner.go:130] >       "size": "127165392",
	I1101 00:18:38.124846   30437 command_runner.go:130] >       "uid": {
	I1101 00:18:38.124850   30437 command_runner.go:130] >         "value": "0"
	I1101 00:18:38.124855   30437 command_runner.go:130] >       },
	I1101 00:18:38.124860   30437 command_runner.go:130] >       "username": "",
	I1101 00:18:38.124866   30437 command_runner.go:130] >       "spec": null,
	I1101 00:18:38.124870   30437 command_runner.go:130] >       "pinned": false
	I1101 00:18:38.124874   30437 command_runner.go:130] >     },
	I1101 00:18:38.124877   30437 command_runner.go:130] >     {
	I1101 00:18:38.124883   30437 command_runner.go:130] >       "id": "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3",
	I1101 00:18:38.124890   30437 command_runner.go:130] >       "repoTags": [
	I1101 00:18:38.124895   30437 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.3"
	I1101 00:18:38.124901   30437 command_runner.go:130] >       ],
	I1101 00:18:38.124906   30437 command_runner.go:130] >       "repoDigests": [
	I1101 00:18:38.124914   30437 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707",
	I1101 00:18:38.124923   30437 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d"
	I1101 00:18:38.124929   30437 command_runner.go:130] >       ],
	I1101 00:18:38.124933   30437 command_runner.go:130] >       "size": "123188534",
	I1101 00:18:38.124937   30437 command_runner.go:130] >       "uid": {
	I1101 00:18:38.124941   30437 command_runner.go:130] >         "value": "0"
	I1101 00:18:38.124945   30437 command_runner.go:130] >       },
	I1101 00:18:38.124949   30437 command_runner.go:130] >       "username": "",
	I1101 00:18:38.124953   30437 command_runner.go:130] >       "spec": null,
	I1101 00:18:38.124957   30437 command_runner.go:130] >       "pinned": false
	I1101 00:18:38.124961   30437 command_runner.go:130] >     },
	I1101 00:18:38.124964   30437 command_runner.go:130] >     {
	I1101 00:18:38.124970   30437 command_runner.go:130] >       "id": "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf",
	I1101 00:18:38.124979   30437 command_runner.go:130] >       "repoTags": [
	I1101 00:18:38.124984   30437 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.3"
	I1101 00:18:38.124990   30437 command_runner.go:130] >       ],
	I1101 00:18:38.124995   30437 command_runner.go:130] >       "repoDigests": [
	I1101 00:18:38.125001   30437 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8",
	I1101 00:18:38.125011   30437 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"
	I1101 00:18:38.125014   30437 command_runner.go:130] >       ],
	I1101 00:18:38.125019   30437 command_runner.go:130] >       "size": "74691991",
	I1101 00:18:38.125023   30437 command_runner.go:130] >       "uid": null,
	I1101 00:18:38.125027   30437 command_runner.go:130] >       "username": "",
	I1101 00:18:38.125032   30437 command_runner.go:130] >       "spec": null,
	I1101 00:18:38.125038   30437 command_runner.go:130] >       "pinned": false
	I1101 00:18:38.125042   30437 command_runner.go:130] >     },
	I1101 00:18:38.125046   30437 command_runner.go:130] >     {
	I1101 00:18:38.125054   30437 command_runner.go:130] >       "id": "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4",
	I1101 00:18:38.125058   30437 command_runner.go:130] >       "repoTags": [
	I1101 00:18:38.125070   30437 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.3"
	I1101 00:18:38.125076   30437 command_runner.go:130] >       ],
	I1101 00:18:38.125080   30437 command_runner.go:130] >       "repoDigests": [
	I1101 00:18:38.125105   30437 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725",
	I1101 00:18:38.125118   30437 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374"
	I1101 00:18:38.125124   30437 command_runner.go:130] >       ],
	I1101 00:18:38.125129   30437 command_runner.go:130] >       "size": "61498678",
	I1101 00:18:38.125135   30437 command_runner.go:130] >       "uid": {
	I1101 00:18:38.125139   30437 command_runner.go:130] >         "value": "0"
	I1101 00:18:38.125144   30437 command_runner.go:130] >       },
	I1101 00:18:38.125148   30437 command_runner.go:130] >       "username": "",
	I1101 00:18:38.125153   30437 command_runner.go:130] >       "spec": null,
	I1101 00:18:38.125158   30437 command_runner.go:130] >       "pinned": false
	I1101 00:18:38.125162   30437 command_runner.go:130] >     },
	I1101 00:18:38.125165   30437 command_runner.go:130] >     {
	I1101 00:18:38.125171   30437 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1101 00:18:38.125178   30437 command_runner.go:130] >       "repoTags": [
	I1101 00:18:38.125182   30437 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1101 00:18:38.125188   30437 command_runner.go:130] >       ],
	I1101 00:18:38.125192   30437 command_runner.go:130] >       "repoDigests": [
	I1101 00:18:38.125200   30437 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1101 00:18:38.125209   30437 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1101 00:18:38.125215   30437 command_runner.go:130] >       ],
	I1101 00:18:38.125222   30437 command_runner.go:130] >       "size": "750414",
	I1101 00:18:38.125225   30437 command_runner.go:130] >       "uid": {
	I1101 00:18:38.125230   30437 command_runner.go:130] >         "value": "65535"
	I1101 00:18:38.125236   30437 command_runner.go:130] >       },
	I1101 00:18:38.125240   30437 command_runner.go:130] >       "username": "",
	I1101 00:18:38.125244   30437 command_runner.go:130] >       "spec": null,
	I1101 00:18:38.125248   30437 command_runner.go:130] >       "pinned": false
	I1101 00:18:38.125254   30437 command_runner.go:130] >     }
	I1101 00:18:38.125257   30437 command_runner.go:130] >   ]
	I1101 00:18:38.125260   30437 command_runner.go:130] > }
	I1101 00:18:38.125396   30437 crio.go:496] all images are preloaded for cri-o runtime.
	I1101 00:18:38.125412   30437 cache_images.go:84] Images are preloaded, skipping loading
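The image list above is the CRI-O image store on the node as reported over CRI. A minimal sketch of how the same list could be inspected by hand, assuming the multinode-600483 profile is still running (crictl's table output differs slightly from the JSON dump above):

	$ minikube -p multinode-600483 ssh -- sudo crictl images
	$ minikube -p multinode-600483 ssh -- sudo crictl images -o json | head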
	I1101 00:18:38.125482   30437 ssh_runner.go:195] Run: crio config
	I1101 00:18:38.175156   30437 command_runner.go:130] ! time="2023-11-01 00:18:38.127260817Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1101 00:18:38.175197   30437 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1101 00:18:38.183583   30437 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1101 00:18:38.183609   30437 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1101 00:18:38.183620   30437 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1101 00:18:38.183637   30437 command_runner.go:130] > #
	I1101 00:18:38.183648   30437 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1101 00:18:38.183661   30437 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1101 00:18:38.183670   30437 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1101 00:18:38.183690   30437 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1101 00:18:38.183700   30437 command_runner.go:130] > # reload'.
	I1101 00:18:38.183710   30437 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1101 00:18:38.183724   30437 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1101 00:18:38.183739   30437 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1101 00:18:38.183753   30437 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1101 00:18:38.183762   30437 command_runner.go:130] > [crio]
	I1101 00:18:38.183773   30437 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1101 00:18:38.183782   30437 command_runner.go:130] > # container images, in this directory.
	I1101 00:18:38.183788   30437 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1101 00:18:38.183801   30437 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1101 00:18:38.183809   30437 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1101 00:18:38.183815   30437 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1101 00:18:38.183824   30437 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1101 00:18:38.183844   30437 command_runner.go:130] > storage_driver = "overlay"
	I1101 00:18:38.183860   30437 command_runner.go:130] > # List of options to pass to the storage driver. Please refer to
	I1101 00:18:38.183865   30437 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1101 00:18:38.183869   30437 command_runner.go:130] > storage_option = [
	I1101 00:18:38.183874   30437 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1101 00:18:38.183878   30437 command_runner.go:130] > ]
	I1101 00:18:38.183884   30437 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1101 00:18:38.183890   30437 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1101 00:18:38.183897   30437 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1101 00:18:38.183903   30437 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1101 00:18:38.183911   30437 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1101 00:18:38.183916   30437 command_runner.go:130] > # always happen on a node reboot
	I1101 00:18:38.183923   30437 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1101 00:18:38.183947   30437 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1101 00:18:38.183957   30437 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1101 00:18:38.183966   30437 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1101 00:18:38.183974   30437 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1101 00:18:38.183982   30437 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1101 00:18:38.183996   30437 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1101 00:18:38.184004   30437 command_runner.go:130] > # internal_wipe = true
	I1101 00:18:38.184009   30437 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1101 00:18:38.184018   30437 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1101 00:18:38.184024   30437 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1101 00:18:38.184030   30437 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1101 00:18:38.184036   30437 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1101 00:18:38.184040   30437 command_runner.go:130] > [crio.api]
	I1101 00:18:38.184052   30437 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1101 00:18:38.184059   30437 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1101 00:18:38.184064   30437 command_runner.go:130] > # IP address on which the stream server will listen.
	I1101 00:18:38.184068   30437 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1101 00:18:38.184076   30437 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1101 00:18:38.184083   30437 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1101 00:18:38.184087   30437 command_runner.go:130] > # stream_port = "0"
	I1101 00:18:38.184094   30437 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1101 00:18:38.184100   30437 command_runner.go:130] > # stream_enable_tls = false
	I1101 00:18:38.184106   30437 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1101 00:18:38.184115   30437 command_runner.go:130] > # stream_idle_timeout = ""
	I1101 00:18:38.184123   30437 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1101 00:18:38.184131   30437 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1101 00:18:38.184136   30437 command_runner.go:130] > # minutes.
	I1101 00:18:38.184141   30437 command_runner.go:130] > # stream_tls_cert = ""
	I1101 00:18:38.184147   30437 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1101 00:18:38.184155   30437 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1101 00:18:38.184159   30437 command_runner.go:130] > # stream_tls_key = ""
	I1101 00:18:38.184165   30437 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1101 00:18:38.184174   30437 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1101 00:18:38.184179   30437 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1101 00:18:38.184203   30437 command_runner.go:130] > # stream_tls_ca = ""
	I1101 00:18:38.184215   30437 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1101 00:18:38.184220   30437 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1101 00:18:38.184230   30437 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1101 00:18:38.184235   30437 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1101 00:18:38.184260   30437 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1101 00:18:38.184269   30437 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1101 00:18:38.184276   30437 command_runner.go:130] > [crio.runtime]
	I1101 00:18:38.184285   30437 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1101 00:18:38.184290   30437 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1101 00:18:38.184298   30437 command_runner.go:130] > # "nofile=1024:2048"
	I1101 00:18:38.184304   30437 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1101 00:18:38.184308   30437 command_runner.go:130] > # default_ulimits = [
	I1101 00:18:38.184312   30437 command_runner.go:130] > # ]
	I1101 00:18:38.184318   30437 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1101 00:18:38.184324   30437 command_runner.go:130] > # no_pivot = false
	I1101 00:18:38.184330   30437 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1101 00:18:38.184338   30437 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1101 00:18:38.184344   30437 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1101 00:18:38.184352   30437 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1101 00:18:38.184357   30437 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1101 00:18:38.184366   30437 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1101 00:18:38.184371   30437 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1101 00:18:38.184378   30437 command_runner.go:130] > # Cgroup setting for conmon
	I1101 00:18:38.184384   30437 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1101 00:18:38.184391   30437 command_runner.go:130] > conmon_cgroup = "pod"
	I1101 00:18:38.184397   30437 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1101 00:18:38.184405   30437 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1101 00:18:38.184411   30437 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1101 00:18:38.184417   30437 command_runner.go:130] > conmon_env = [
	I1101 00:18:38.184423   30437 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1101 00:18:38.184429   30437 command_runner.go:130] > ]
	I1101 00:18:38.184434   30437 command_runner.go:130] > # Additional environment variables to set for all the
	I1101 00:18:38.184440   30437 command_runner.go:130] > # containers. These are overridden if set in the
	I1101 00:18:38.184446   30437 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1101 00:18:38.184452   30437 command_runner.go:130] > # default_env = [
	I1101 00:18:38.184456   30437 command_runner.go:130] > # ]
	I1101 00:18:38.184463   30437 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1101 00:18:38.184467   30437 command_runner.go:130] > # selinux = false
	I1101 00:18:38.184475   30437 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1101 00:18:38.184483   30437 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1101 00:18:38.184489   30437 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1101 00:18:38.184496   30437 command_runner.go:130] > # seccomp_profile = ""
	I1101 00:18:38.184502   30437 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1101 00:18:38.184510   30437 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1101 00:18:38.184516   30437 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1101 00:18:38.184520   30437 command_runner.go:130] > # which might increase security.
	I1101 00:18:38.184527   30437 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1101 00:18:38.184533   30437 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1101 00:18:38.184542   30437 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1101 00:18:38.184554   30437 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1101 00:18:38.184561   30437 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1101 00:18:38.184568   30437 command_runner.go:130] > # This option supports live configuration reload.
	I1101 00:18:38.184573   30437 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1101 00:18:38.184583   30437 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1101 00:18:38.184588   30437 command_runner.go:130] > # the cgroup blockio controller.
	I1101 00:18:38.184595   30437 command_runner.go:130] > # blockio_config_file = ""
	I1101 00:18:38.184601   30437 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1101 00:18:38.184606   30437 command_runner.go:130] > # irqbalance daemon.
	I1101 00:18:38.184614   30437 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1101 00:18:38.184621   30437 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1101 00:18:38.184628   30437 command_runner.go:130] > # This option supports live configuration reload.
	I1101 00:18:38.184632   30437 command_runner.go:130] > # rdt_config_file = ""
	I1101 00:18:38.184638   30437 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1101 00:18:38.184645   30437 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1101 00:18:38.184650   30437 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1101 00:18:38.184658   30437 command_runner.go:130] > # separate_pull_cgroup = ""
	I1101 00:18:38.184664   30437 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1101 00:18:38.184672   30437 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1101 00:18:38.184677   30437 command_runner.go:130] > # will be added.
	I1101 00:18:38.184681   30437 command_runner.go:130] > # default_capabilities = [
	I1101 00:18:38.184685   30437 command_runner.go:130] > # 	"CHOWN",
	I1101 00:18:38.184691   30437 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1101 00:18:38.184695   30437 command_runner.go:130] > # 	"FSETID",
	I1101 00:18:38.184701   30437 command_runner.go:130] > # 	"FOWNER",
	I1101 00:18:38.184704   30437 command_runner.go:130] > # 	"SETGID",
	I1101 00:18:38.184708   30437 command_runner.go:130] > # 	"SETUID",
	I1101 00:18:38.184712   30437 command_runner.go:130] > # 	"SETPCAP",
	I1101 00:18:38.184717   30437 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1101 00:18:38.184722   30437 command_runner.go:130] > # 	"KILL",
	I1101 00:18:38.184726   30437 command_runner.go:130] > # ]
	I1101 00:18:38.184732   30437 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1101 00:18:38.184740   30437 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1101 00:18:38.184744   30437 command_runner.go:130] > # default_sysctls = [
	I1101 00:18:38.184748   30437 command_runner.go:130] > # ]
	I1101 00:18:38.184753   30437 command_runner.go:130] > # List of devices on the host that a
	I1101 00:18:38.184761   30437 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1101 00:18:38.184765   30437 command_runner.go:130] > # allowed_devices = [
	I1101 00:18:38.184769   30437 command_runner.go:130] > # 	"/dev/fuse",
	I1101 00:18:38.184775   30437 command_runner.go:130] > # ]
	I1101 00:18:38.184780   30437 command_runner.go:130] > # List of additional devices, specified as
	I1101 00:18:38.184789   30437 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1101 00:18:38.184795   30437 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1101 00:18:38.184812   30437 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1101 00:18:38.184818   30437 command_runner.go:130] > # additional_devices = [
	I1101 00:18:38.184822   30437 command_runner.go:130] > # ]
	I1101 00:18:38.184828   30437 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1101 00:18:38.184834   30437 command_runner.go:130] > # cdi_spec_dirs = [
	I1101 00:18:38.184838   30437 command_runner.go:130] > # 	"/etc/cdi",
	I1101 00:18:38.184842   30437 command_runner.go:130] > # 	"/var/run/cdi",
	I1101 00:18:38.184848   30437 command_runner.go:130] > # ]
	I1101 00:18:38.184854   30437 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1101 00:18:38.184860   30437 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1101 00:18:38.184866   30437 command_runner.go:130] > # Defaults to false.
	I1101 00:18:38.184871   30437 command_runner.go:130] > # device_ownership_from_security_context = false
	I1101 00:18:38.184880   30437 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1101 00:18:38.184886   30437 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1101 00:18:38.184893   30437 command_runner.go:130] > # hooks_dir = [
	I1101 00:18:38.184897   30437 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1101 00:18:38.184903   30437 command_runner.go:130] > # ]
	I1101 00:18:38.184909   30437 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1101 00:18:38.184919   30437 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1101 00:18:38.184927   30437 command_runner.go:130] > # its default mounts from the following two files:
	I1101 00:18:38.184931   30437 command_runner.go:130] > #
	I1101 00:18:38.184937   30437 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1101 00:18:38.184945   30437 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1101 00:18:38.184951   30437 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1101 00:18:38.184954   30437 command_runner.go:130] > #
	I1101 00:18:38.184960   30437 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1101 00:18:38.184969   30437 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1101 00:18:38.184975   30437 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1101 00:18:38.184983   30437 command_runner.go:130] > #      only add mounts it finds in this file.
	I1101 00:18:38.184986   30437 command_runner.go:130] > #
	I1101 00:18:38.184993   30437 command_runner.go:130] > # default_mounts_file = ""
	I1101 00:18:38.184998   30437 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1101 00:18:38.185007   30437 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1101 00:18:38.185011   30437 command_runner.go:130] > pids_limit = 1024
	I1101 00:18:38.185019   30437 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1101 00:18:38.185026   30437 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1101 00:18:38.185034   30437 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1101 00:18:38.185042   30437 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1101 00:18:38.185052   30437 command_runner.go:130] > # log_size_max = -1
	I1101 00:18:38.185058   30437 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1101 00:18:38.185065   30437 command_runner.go:130] > # log_to_journald = false
	I1101 00:18:38.185071   30437 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1101 00:18:38.185078   30437 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1101 00:18:38.185083   30437 command_runner.go:130] > # Path to directory for container attach sockets.
	I1101 00:18:38.185090   30437 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1101 00:18:38.185095   30437 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1101 00:18:38.185099   30437 command_runner.go:130] > # bind_mount_prefix = ""
	I1101 00:18:38.185104   30437 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1101 00:18:38.185109   30437 command_runner.go:130] > # read_only = false
	I1101 00:18:38.185115   30437 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1101 00:18:38.185123   30437 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1101 00:18:38.185128   30437 command_runner.go:130] > # live configuration reload.
	I1101 00:18:38.185134   30437 command_runner.go:130] > # log_level = "info"
	I1101 00:18:38.185140   30437 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1101 00:18:38.185147   30437 command_runner.go:130] > # This option supports live configuration reload.
	I1101 00:18:38.185154   30437 command_runner.go:130] > # log_filter = ""
	I1101 00:18:38.185162   30437 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1101 00:18:38.185168   30437 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1101 00:18:38.185176   30437 command_runner.go:130] > # separated by comma.
	I1101 00:18:38.185180   30437 command_runner.go:130] > # uid_mappings = ""
	I1101 00:18:38.185187   30437 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1101 00:18:38.185195   30437 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1101 00:18:38.185200   30437 command_runner.go:130] > # separated by comma.
	I1101 00:18:38.185206   30437 command_runner.go:130] > # gid_mappings = ""
	I1101 00:18:38.185212   30437 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1101 00:18:38.185221   30437 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1101 00:18:38.185227   30437 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1101 00:18:38.185234   30437 command_runner.go:130] > # minimum_mappable_uid = -1
	I1101 00:18:38.185239   30437 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1101 00:18:38.185248   30437 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1101 00:18:38.185254   30437 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1101 00:18:38.185261   30437 command_runner.go:130] > # minimum_mappable_gid = -1
	I1101 00:18:38.185266   30437 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1101 00:18:38.185272   30437 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1101 00:18:38.185280   30437 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1101 00:18:38.185285   30437 command_runner.go:130] > # ctr_stop_timeout = 30
	I1101 00:18:38.185293   30437 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1101 00:18:38.185298   30437 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1101 00:18:38.185306   30437 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1101 00:18:38.185311   30437 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1101 00:18:38.185319   30437 command_runner.go:130] > drop_infra_ctr = false
	I1101 00:18:38.185325   30437 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1101 00:18:38.185333   30437 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1101 00:18:38.185340   30437 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1101 00:18:38.185346   30437 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1101 00:18:38.185352   30437 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1101 00:18:38.185359   30437 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1101 00:18:38.185364   30437 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1101 00:18:38.185370   30437 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1101 00:18:38.185377   30437 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1101 00:18:38.185383   30437 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1101 00:18:38.185391   30437 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1101 00:18:38.185399   30437 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1101 00:18:38.185407   30437 command_runner.go:130] > # default_runtime = "runc"
	I1101 00:18:38.185412   30437 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1101 00:18:38.185422   30437 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1101 00:18:38.185433   30437 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1101 00:18:38.185441   30437 command_runner.go:130] > # creation as a file is not desired either.
	I1101 00:18:38.185449   30437 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1101 00:18:38.185456   30437 command_runner.go:130] > # the hostname is being managed dynamically.
	I1101 00:18:38.185461   30437 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1101 00:18:38.185468   30437 command_runner.go:130] > # ]
	I1101 00:18:38.185474   30437 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1101 00:18:38.185483   30437 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1101 00:18:38.185490   30437 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1101 00:18:38.185498   30437 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1101 00:18:38.185501   30437 command_runner.go:130] > #
	I1101 00:18:38.185506   30437 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1101 00:18:38.185513   30437 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1101 00:18:38.185518   30437 command_runner.go:130] > #  runtime_type = "oci"
	I1101 00:18:38.185523   30437 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1101 00:18:38.185528   30437 command_runner.go:130] > #  privileged_without_host_devices = false
	I1101 00:18:38.185532   30437 command_runner.go:130] > #  allowed_annotations = []
	I1101 00:18:38.185537   30437 command_runner.go:130] > # Where:
	I1101 00:18:38.185543   30437 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1101 00:18:38.185553   30437 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1101 00:18:38.185559   30437 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1101 00:18:38.185567   30437 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1101 00:18:38.185572   30437 command_runner.go:130] > #   in $PATH.
	I1101 00:18:38.185578   30437 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1101 00:18:38.185585   30437 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1101 00:18:38.185591   30437 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1101 00:18:38.185597   30437 command_runner.go:130] > #   state.
	I1101 00:18:38.185603   30437 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1101 00:18:38.185611   30437 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1101 00:18:38.185617   30437 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1101 00:18:38.185623   30437 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1101 00:18:38.185631   30437 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1101 00:18:38.185640   30437 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1101 00:18:38.185647   30437 command_runner.go:130] > #   The currently recognized values are:
	I1101 00:18:38.185654   30437 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1101 00:18:38.185663   30437 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1101 00:18:38.185669   30437 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1101 00:18:38.185678   30437 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1101 00:18:38.185688   30437 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1101 00:18:38.185697   30437 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1101 00:18:38.185703   30437 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1101 00:18:38.185710   30437 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1101 00:18:38.185717   30437 command_runner.go:130] > #   should be moved to the container's cgroup
	I1101 00:18:38.185721   30437 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1101 00:18:38.185726   30437 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1101 00:18:38.185730   30437 command_runner.go:130] > runtime_type = "oci"
	I1101 00:18:38.185735   30437 command_runner.go:130] > runtime_root = "/run/runc"
	I1101 00:18:38.185740   30437 command_runner.go:130] > runtime_config_path = ""
	I1101 00:18:38.185746   30437 command_runner.go:130] > monitor_path = ""
	I1101 00:18:38.185750   30437 command_runner.go:130] > monitor_cgroup = ""
	I1101 00:18:38.185757   30437 command_runner.go:130] > monitor_exec_cgroup = ""
	I1101 00:18:38.185763   30437 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1101 00:18:38.185769   30437 command_runner.go:130] > # running containers
	I1101 00:18:38.185773   30437 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1101 00:18:38.185782   30437 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1101 00:18:38.185818   30437 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1101 00:18:38.185829   30437 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1101 00:18:38.185834   30437 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1101 00:18:38.185839   30437 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1101 00:18:38.185844   30437 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1101 00:18:38.185850   30437 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1101 00:18:38.185857   30437 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1101 00:18:38.185862   30437 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1101 00:18:38.185870   30437 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1101 00:18:38.185876   30437 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1101 00:18:38.185884   30437 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1101 00:18:38.185892   30437 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1101 00:18:38.185900   30437 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1101 00:18:38.185908   30437 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1101 00:18:38.185919   30437 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1101 00:18:38.185929   30437 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1101 00:18:38.185938   30437 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1101 00:18:38.185945   30437 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1101 00:18:38.185951   30437 command_runner.go:130] > # Example:
	I1101 00:18:38.185956   30437 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1101 00:18:38.185963   30437 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1101 00:18:38.185969   30437 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1101 00:18:38.185976   30437 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1101 00:18:38.185980   30437 command_runner.go:130] > # cpuset = 0
	I1101 00:18:38.185987   30437 command_runner.go:130] > # cpushares = "0-1"
	I1101 00:18:38.185992   30437 command_runner.go:130] > # Where:
	I1101 00:18:38.185999   30437 command_runner.go:130] > # The workload name is workload-type.
	I1101 00:18:38.186006   30437 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1101 00:18:38.186014   30437 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1101 00:18:38.186020   30437 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1101 00:18:38.186029   30437 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1101 00:18:38.186037   30437 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1101 00:18:38.186041   30437 command_runner.go:130] > # 
	I1101 00:18:38.186051   30437 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1101 00:18:38.186057   30437 command_runner.go:130] > #
	I1101 00:18:38.186063   30437 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1101 00:18:38.186070   30437 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1101 00:18:38.186079   30437 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1101 00:18:38.186085   30437 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1101 00:18:38.186093   30437 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1101 00:18:38.186097   30437 command_runner.go:130] > [crio.image]
	I1101 00:18:38.186104   30437 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1101 00:18:38.186111   30437 command_runner.go:130] > # default_transport = "docker://"
	I1101 00:18:38.186118   30437 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1101 00:18:38.186127   30437 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1101 00:18:38.186131   30437 command_runner.go:130] > # global_auth_file = ""
	I1101 00:18:38.186136   30437 command_runner.go:130] > # The image used to instantiate infra containers.
	I1101 00:18:38.186142   30437 command_runner.go:130] > # This option supports live configuration reload.
	I1101 00:18:38.186147   30437 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1101 00:18:38.186157   30437 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1101 00:18:38.186165   30437 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1101 00:18:38.186171   30437 command_runner.go:130] > # This option supports live configuration reload.
	I1101 00:18:38.186178   30437 command_runner.go:130] > # pause_image_auth_file = ""
	I1101 00:18:38.186184   30437 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1101 00:18:38.186192   30437 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1101 00:18:38.186198   30437 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1101 00:18:38.186206   30437 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1101 00:18:38.186211   30437 command_runner.go:130] > # pause_command = "/pause"
	I1101 00:18:38.186216   30437 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1101 00:18:38.186225   30437 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1101 00:18:38.186231   30437 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1101 00:18:38.186237   30437 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1101 00:18:38.186242   30437 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1101 00:18:38.186246   30437 command_runner.go:130] > # signature_policy = ""
	I1101 00:18:38.186251   30437 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1101 00:18:38.186257   30437 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1101 00:18:38.186261   30437 command_runner.go:130] > # changing them here.
	I1101 00:18:38.186265   30437 command_runner.go:130] > # insecure_registries = [
	I1101 00:18:38.186268   30437 command_runner.go:130] > # ]
	I1101 00:18:38.186276   30437 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1101 00:18:38.186281   30437 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1101 00:18:38.186285   30437 command_runner.go:130] > # image_volumes = "mkdir"
	I1101 00:18:38.186290   30437 command_runner.go:130] > # Temporary directory to use for storing big files
	I1101 00:18:38.186294   30437 command_runner.go:130] > # big_files_temporary_dir = ""
	I1101 00:18:38.186299   30437 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1101 00:18:38.186303   30437 command_runner.go:130] > # CNI plugins.
	I1101 00:18:38.186307   30437 command_runner.go:130] > [crio.network]
	I1101 00:18:38.186312   30437 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1101 00:18:38.186317   30437 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1101 00:18:38.186321   30437 command_runner.go:130] > # cni_default_network = ""
	I1101 00:18:38.186327   30437 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1101 00:18:38.186338   30437 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1101 00:18:38.186344   30437 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1101 00:18:38.186350   30437 command_runner.go:130] > # plugin_dirs = [
	I1101 00:18:38.186354   30437 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1101 00:18:38.186357   30437 command_runner.go:130] > # ]
	I1101 00:18:38.186363   30437 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1101 00:18:38.186370   30437 command_runner.go:130] > [crio.metrics]
	I1101 00:18:38.186377   30437 command_runner.go:130] > # Globally enable or disable metrics support.
	I1101 00:18:38.186384   30437 command_runner.go:130] > enable_metrics = true
	I1101 00:18:38.186388   30437 command_runner.go:130] > # Specify enabled metrics collectors.
	I1101 00:18:38.186393   30437 command_runner.go:130] > # Per default all metrics are enabled.
	I1101 00:18:38.186401   30437 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1101 00:18:38.186407   30437 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1101 00:18:38.186415   30437 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1101 00:18:38.186419   30437 command_runner.go:130] > # metrics_collectors = [
	I1101 00:18:38.186426   30437 command_runner.go:130] > # 	"operations",
	I1101 00:18:38.186431   30437 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1101 00:18:38.186438   30437 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1101 00:18:38.186442   30437 command_runner.go:130] > # 	"operations_errors",
	I1101 00:18:38.186449   30437 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1101 00:18:38.186453   30437 command_runner.go:130] > # 	"image_pulls_by_name",
	I1101 00:18:38.186460   30437 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1101 00:18:38.186465   30437 command_runner.go:130] > # 	"image_pulls_failures",
	I1101 00:18:38.186470   30437 command_runner.go:130] > # 	"image_pulls_successes",
	I1101 00:18:38.186476   30437 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1101 00:18:38.186481   30437 command_runner.go:130] > # 	"image_layer_reuse",
	I1101 00:18:38.186485   30437 command_runner.go:130] > # 	"containers_oom_total",
	I1101 00:18:38.186492   30437 command_runner.go:130] > # 	"containers_oom",
	I1101 00:18:38.186496   30437 command_runner.go:130] > # 	"processes_defunct",
	I1101 00:18:38.186501   30437 command_runner.go:130] > # 	"operations_total",
	I1101 00:18:38.186505   30437 command_runner.go:130] > # 	"operations_latency_seconds",
	I1101 00:18:38.186511   30437 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1101 00:18:38.186515   30437 command_runner.go:130] > # 	"operations_errors_total",
	I1101 00:18:38.186522   30437 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1101 00:18:38.186527   30437 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1101 00:18:38.186533   30437 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1101 00:18:38.186538   30437 command_runner.go:130] > # 	"image_pulls_success_total",
	I1101 00:18:38.186546   30437 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1101 00:18:38.186551   30437 command_runner.go:130] > # 	"containers_oom_count_total",
	I1101 00:18:38.186557   30437 command_runner.go:130] > # ]
	I1101 00:18:38.186562   30437 command_runner.go:130] > # The port on which the metrics server will listen.
	I1101 00:18:38.186566   30437 command_runner.go:130] > # metrics_port = 9090
	I1101 00:18:38.186572   30437 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1101 00:18:38.186579   30437 command_runner.go:130] > # metrics_socket = ""
	I1101 00:18:38.186584   30437 command_runner.go:130] > # The certificate for the secure metrics server.
	I1101 00:18:38.186592   30437 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1101 00:18:38.186598   30437 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1101 00:18:38.186605   30437 command_runner.go:130] > # certificate on any modification event.
	I1101 00:18:38.186609   30437 command_runner.go:130] > # metrics_cert = ""
	I1101 00:18:38.186615   30437 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1101 00:18:38.186620   30437 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1101 00:18:38.186627   30437 command_runner.go:130] > # metrics_key = ""
	I1101 00:18:38.186634   30437 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1101 00:18:38.186640   30437 command_runner.go:130] > [crio.tracing]
	I1101 00:18:38.186645   30437 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1101 00:18:38.186650   30437 command_runner.go:130] > # enable_tracing = false
	I1101 00:18:38.186656   30437 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1101 00:18:38.186660   30437 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1101 00:18:38.186668   30437 command_runner.go:130] > # Number of samples to collect per million spans.
	I1101 00:18:38.186673   30437 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1101 00:18:38.186681   30437 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1101 00:18:38.186685   30437 command_runner.go:130] > [crio.stats]
	I1101 00:18:38.186693   30437 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1101 00:18:38.186698   30437 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1101 00:18:38.186704   30437 command_runner.go:130] > # stats_collection_period = 0
	I1101 00:18:38.186774   30437 cni.go:84] Creating CNI manager for ""
	I1101 00:18:38.186789   30437 cni.go:136] 3 nodes found, recommending kindnet
	I1101 00:18:38.186805   30437 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 00:18:38.186825   30437 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.130 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-600483 NodeName:multinode-600483 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.130"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.130 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 00:18:38.186935   30437 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.130
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-600483"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.130
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.130"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 00:18:38.186998   30437 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-600483 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.130
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-600483 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1101 00:18:38.187048   30437 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 00:18:38.195248   30437 command_runner.go:130] > kubeadm
	I1101 00:18:38.195267   30437 command_runner.go:130] > kubectl
	I1101 00:18:38.195274   30437 command_runner.go:130] > kubelet
	I1101 00:18:38.195309   30437 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 00:18:38.195369   30437 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 00:18:38.203889   30437 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I1101 00:18:38.218136   30437 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 00:18:38.233525   30437 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I1101 00:18:38.251162   30437 ssh_runner.go:195] Run: grep 192.168.39.130	control-plane.minikube.internal$ /etc/hosts
	I1101 00:18:38.254930   30437 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.130	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 00:18:38.268077   30437 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483 for IP: 192.168.39.130
	I1101 00:18:38.268110   30437 certs.go:190] acquiring lock for shared ca certs: {Name:mk6185b3938c4b51e80f57b7c81926c81b632b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:18:38.268269   30437 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key
	I1101 00:18:38.268316   30437 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key
	I1101 00:18:38.268425   30437 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.key
	I1101 00:18:38.268509   30437 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/apiserver.key.3e334af8
	I1101 00:18:38.268570   30437 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/proxy-client.key
	I1101 00:18:38.268589   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1101 00:18:38.268611   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1101 00:18:38.268630   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1101 00:18:38.268646   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1101 00:18:38.268664   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1101 00:18:38.268682   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1101 00:18:38.268701   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1101 00:18:38.268719   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1101 00:18:38.268788   30437 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem (1338 bytes)
	W1101 00:18:38.268833   30437 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504_empty.pem, impossibly tiny 0 bytes
	I1101 00:18:38.268849   30437 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 00:18:38.268886   30437 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem (1078 bytes)
	I1101 00:18:38.268921   30437 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem (1123 bytes)
	I1101 00:18:38.268957   30437 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem (1675 bytes)
	I1101 00:18:38.269015   30437 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem (1708 bytes)
	I1101 00:18:38.269052   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> /usr/share/ca-certificates/145042.pem
	I1101 00:18:38.269074   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:18:38.269094   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem -> /usr/share/ca-certificates/14504.pem
	I1101 00:18:38.269682   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 00:18:38.295434   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 00:18:38.320867   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 00:18:38.345758   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 00:18:38.369290   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 00:18:38.392427   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 00:18:38.419968   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 00:18:38.444136   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 00:18:38.470374   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /usr/share/ca-certificates/145042.pem (1708 bytes)
	I1101 00:18:38.494462   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 00:18:38.517319   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem --> /usr/share/ca-certificates/14504.pem (1338 bytes)
	I1101 00:18:38.540577   30437 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 00:18:38.556725   30437 ssh_runner.go:195] Run: openssl version
	I1101 00:18:38.561946   30437 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1101 00:18:38.562100   30437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145042.pem && ln -fs /usr/share/ca-certificates/145042.pem /etc/ssl/certs/145042.pem"
	I1101 00:18:38.571529   30437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145042.pem
	I1101 00:18:38.576264   30437 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 00:18:38.576351   30437 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 00:18:38.576409   30437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145042.pem
	I1101 00:18:38.582176   30437 command_runner.go:130] > 3ec20f2e
	I1101 00:18:38.582302   30437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145042.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 00:18:38.593530   30437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 00:18:38.604127   30437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:18:38.609062   30437 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:18:38.609089   30437 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:18:38.609132   30437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:18:38.614798   30437 command_runner.go:130] > b5213941
	I1101 00:18:38.614908   30437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 00:18:38.625129   30437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14504.pem && ln -fs /usr/share/ca-certificates/14504.pem /etc/ssl/certs/14504.pem"
	I1101 00:18:38.634761   30437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14504.pem
	I1101 00:18:38.639128   30437 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 00:18:38.639274   30437 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 00:18:38.639339   30437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14504.pem
	I1101 00:18:38.644779   30437 command_runner.go:130] > 51391683
	I1101 00:18:38.644909   30437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14504.pem /etc/ssl/certs/51391683.0"
	I1101 00:18:38.654351   30437 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 00:18:38.658606   30437 command_runner.go:130] > ca.crt
	I1101 00:18:38.658623   30437 command_runner.go:130] > ca.key
	I1101 00:18:38.658630   30437 command_runner.go:130] > healthcheck-client.crt
	I1101 00:18:38.658637   30437 command_runner.go:130] > healthcheck-client.key
	I1101 00:18:38.658643   30437 command_runner.go:130] > peer.crt
	I1101 00:18:38.658649   30437 command_runner.go:130] > peer.key
	I1101 00:18:38.658654   30437 command_runner.go:130] > server.crt
	I1101 00:18:38.658660   30437 command_runner.go:130] > server.key
	I1101 00:18:38.658848   30437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 00:18:38.664408   30437 command_runner.go:130] > Certificate will not expire
	I1101 00:18:38.664833   30437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 00:18:38.670939   30437 command_runner.go:130] > Certificate will not expire
	I1101 00:18:38.671103   30437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 00:18:38.676927   30437 command_runner.go:130] > Certificate will not expire
	I1101 00:18:38.677002   30437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 00:18:38.682468   30437 command_runner.go:130] > Certificate will not expire
	I1101 00:18:38.682805   30437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 00:18:38.688511   30437 command_runner.go:130] > Certificate will not expire
	I1101 00:18:38.688579   30437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 00:18:38.694196   30437 command_runner.go:130] > Certificate will not expire
	I1101 00:18:38.694265   30437 kubeadm.go:404] StartCluster: {Name:multinode-600483 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.3 ClusterName:multinode-600483 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.130 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.109 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.2 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiz
ations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:18:38.694373   30437 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 00:18:38.694434   30437 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 00:18:38.730379   30437 cri.go:89] found id: ""
	I1101 00:18:38.730448   30437 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 00:18:38.739636   30437 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1101 00:18:38.739654   30437 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1101 00:18:38.739661   30437 command_runner.go:130] > /var/lib/minikube/etcd:
	I1101 00:18:38.739664   30437 command_runner.go:130] > member
	I1101 00:18:38.739684   30437 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1101 00:18:38.739710   30437 kubeadm.go:636] restartCluster start
	I1101 00:18:38.739761   30437 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 00:18:38.748917   30437 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:18:38.749401   30437 kubeconfig.go:92] found "multinode-600483" server: "https://192.168.39.130:8443"
	I1101 00:18:38.749874   30437 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 00:18:38.750110   30437 kapi.go:59] client config for multinode-600483: &rest.Config{Host:"https://192.168.39.130:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.key", CAFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 00:18:38.750737   30437 cert_rotation.go:137] Starting client certificate rotation controller
	I1101 00:18:38.750922   30437 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 00:18:38.758948   30437 api_server.go:166] Checking apiserver status ...
	I1101 00:18:38.759015   30437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:18:38.769183   30437 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:18:38.769209   30437 api_server.go:166] Checking apiserver status ...
	I1101 00:18:38.769263   30437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:18:38.778953   30437 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:18:39.279673   30437 api_server.go:166] Checking apiserver status ...
	I1101 00:18:39.279781   30437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:18:39.291267   30437 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:18:39.779873   30437 api_server.go:166] Checking apiserver status ...
	I1101 00:18:39.780007   30437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:18:39.791480   30437 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:18:40.279029   30437 api_server.go:166] Checking apiserver status ...
	I1101 00:18:40.279106   30437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:18:40.290392   30437 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:18:40.779971   30437 api_server.go:166] Checking apiserver status ...
	I1101 00:18:40.780041   30437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:18:40.791298   30437 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:18:41.279395   30437 api_server.go:166] Checking apiserver status ...
	I1101 00:18:41.279466   30437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:18:41.291345   30437 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:18:41.780018   30437 api_server.go:166] Checking apiserver status ...
	I1101 00:18:41.780102   30437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:18:41.790751   30437 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:18:42.279284   30437 api_server.go:166] Checking apiserver status ...
	I1101 00:18:42.279412   30437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:18:42.291284   30437 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:18:42.779901   30437 api_server.go:166] Checking apiserver status ...
	I1101 00:18:42.780023   30437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:18:42.791786   30437 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:18:43.279171   30437 api_server.go:166] Checking apiserver status ...
	I1101 00:18:43.279244   30437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:18:43.290626   30437 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:18:43.779142   30437 api_server.go:166] Checking apiserver status ...
	I1101 00:18:43.779210   30437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:18:43.790992   30437 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:18:44.279549   30437 api_server.go:166] Checking apiserver status ...
	I1101 00:18:44.279650   30437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:18:44.291502   30437 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:18:44.779030   30437 api_server.go:166] Checking apiserver status ...
	I1101 00:18:44.779108   30437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:18:44.790212   30437 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:18:45.279792   30437 api_server.go:166] Checking apiserver status ...
	I1101 00:18:45.279874   30437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:18:45.290524   30437 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:18:45.779106   30437 api_server.go:166] Checking apiserver status ...
	I1101 00:18:45.779206   30437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:18:45.790050   30437 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:18:46.280086   30437 api_server.go:166] Checking apiserver status ...
	I1101 00:18:46.280157   30437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:18:46.291060   30437 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:18:46.779687   30437 api_server.go:166] Checking apiserver status ...
	I1101 00:18:46.779775   30437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:18:46.790363   30437 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:18:47.280021   30437 api_server.go:166] Checking apiserver status ...
	I1101 00:18:47.280133   30437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:18:47.291213   30437 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:18:47.779874   30437 api_server.go:166] Checking apiserver status ...
	I1101 00:18:47.779965   30437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:18:47.790187   30437 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:18:48.280032   30437 api_server.go:166] Checking apiserver status ...
	I1101 00:18:48.280117   30437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:18:48.291727   30437 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:18:48.759616   30437 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1101 00:18:48.759657   30437 kubeadm.go:1128] stopping kube-system containers ...
	I1101 00:18:48.759667   30437 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 00:18:48.759719   30437 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 00:18:48.800026   30437 cri.go:89] found id: ""
	I1101 00:18:48.800096   30437 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 00:18:48.816332   30437 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 00:18:48.825353   30437 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1101 00:18:48.825380   30437 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1101 00:18:48.825388   30437 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1101 00:18:48.825398   30437 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 00:18:48.825430   30437 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 00:18:48.825484   30437 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 00:18:48.834701   30437 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 00:18:48.834725   30437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:18:48.948954   30437 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 00:18:48.949298   30437 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1101 00:18:48.949644   30437 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1101 00:18:48.950057   30437 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 00:18:48.950715   30437 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1101 00:18:48.951201   30437 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1101 00:18:48.952296   30437 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1101 00:18:48.952571   30437 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1101 00:18:48.953050   30437 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1101 00:18:48.953517   30437 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 00:18:48.953961   30437 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 00:18:48.954795   30437 command_runner.go:130] > [certs] Using the existing "sa" key
	I1101 00:18:48.956323   30437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:18:49.005376   30437 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 00:18:49.270025   30437 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 00:18:49.532081   30437 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 00:18:49.662486   30437 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 00:18:49.843770   30437 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 00:18:49.846773   30437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:18:50.035831   30437 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 00:18:50.035855   30437 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 00:18:50.035861   30437 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1101 00:18:50.035887   30437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:18:50.109547   30437 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 00:18:50.109580   30437 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 00:18:50.109595   30437 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 00:18:50.109609   30437 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 00:18:50.109653   30437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:18:50.181190   30437 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 00:18:50.181238   30437 api_server.go:52] waiting for apiserver process to appear ...
	I1101 00:18:50.181299   30437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:18:50.193882   30437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:18:50.713843   30437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:18:51.213222   30437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:18:51.713615   30437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:18:52.213559   30437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:18:52.245301   30437 command_runner.go:130] > 1075
	I1101 00:18:52.245608   30437 api_server.go:72] duration metric: took 2.06436543s to wait for apiserver process to appear ...
	I1101 00:18:52.245631   30437 api_server.go:88] waiting for apiserver healthz status ...
	I1101 00:18:52.245649   30437 api_server.go:253] Checking apiserver healthz at https://192.168.39.130:8443/healthz ...
	I1101 00:18:52.246112   30437 api_server.go:269] stopped: https://192.168.39.130:8443/healthz: Get "https://192.168.39.130:8443/healthz": dial tcp 192.168.39.130:8443: connect: connection refused
	I1101 00:18:52.246145   30437 api_server.go:253] Checking apiserver healthz at https://192.168.39.130:8443/healthz ...
	I1101 00:18:52.246526   30437 api_server.go:269] stopped: https://192.168.39.130:8443/healthz: Get "https://192.168.39.130:8443/healthz": dial tcp 192.168.39.130:8443: connect: connection refused
	I1101 00:18:52.746938   30437 api_server.go:253] Checking apiserver healthz at https://192.168.39.130:8443/healthz ...
	I1101 00:18:56.749572   30437 api_server.go:279] https://192.168.39.130:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 00:18:56.749604   30437 api_server.go:103] status: https://192.168.39.130:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 00:18:56.749615   30437 api_server.go:253] Checking apiserver healthz at https://192.168.39.130:8443/healthz ...
	I1101 00:18:56.860116   30437 api_server.go:279] https://192.168.39.130:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 00:18:56.860146   30437 api_server.go:103] status: https://192.168.39.130:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 00:18:57.247687   30437 api_server.go:253] Checking apiserver healthz at https://192.168.39.130:8443/healthz ...
	I1101 00:18:57.253990   30437 api_server.go:279] https://192.168.39.130:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 00:18:57.254025   30437 api_server.go:103] status: https://192.168.39.130:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 00:18:57.747647   30437 api_server.go:253] Checking apiserver healthz at https://192.168.39.130:8443/healthz ...
	I1101 00:18:57.753514   30437 api_server.go:279] https://192.168.39.130:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 00:18:57.753545   30437 api_server.go:103] status: https://192.168.39.130:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 00:18:58.247433   30437 api_server.go:253] Checking apiserver healthz at https://192.168.39.130:8443/healthz ...
	I1101 00:18:58.255875   30437 api_server.go:279] https://192.168.39.130:8443/healthz returned 200:
	ok
	I1101 00:18:58.256013   30437 round_trippers.go:463] GET https://192.168.39.130:8443/version
	I1101 00:18:58.256028   30437 round_trippers.go:469] Request Headers:
	I1101 00:18:58.256040   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:18:58.256071   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:18:58.281358   30437 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I1101 00:18:58.281387   30437 round_trippers.go:577] Response Headers:
	I1101 00:18:58.281399   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:18:58.281407   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:18:58.281419   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:18:58.281427   30437 round_trippers.go:580]     Content-Length: 264
	I1101 00:18:58.281435   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:18:58 GMT
	I1101 00:18:58.281444   30437 round_trippers.go:580]     Audit-Id: 8950b372-95c8-4911-b441-6b599e5d09b6
	I1101 00:18:58.281452   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:18:58.281494   30437 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1101 00:18:58.281603   30437 api_server.go:141] control plane version: v1.28.3
	I1101 00:18:58.281632   30437 api_server.go:131] duration metric: took 6.035995084s to wait for apiserver health ...
	I1101 00:18:58.281640   30437 cni.go:84] Creating CNI manager for ""
	I1101 00:18:58.281646   30437 cni.go:136] 3 nodes found, recommending kindnet
	I1101 00:18:58.283429   30437 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1101 00:18:58.284774   30437 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 00:18:58.290440   30437 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1101 00:18:58.290470   30437 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1101 00:18:58.290482   30437 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1101 00:18:58.290493   30437 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1101 00:18:58.290502   30437 command_runner.go:130] > Access: 2023-11-01 00:18:23.441951890 +0000
	I1101 00:18:58.290510   30437 command_runner.go:130] > Modify: 2023-10-31 23:04:20.000000000 +0000
	I1101 00:18:58.290516   30437 command_runner.go:130] > Change: 2023-11-01 00:18:21.588951890 +0000
	I1101 00:18:58.290520   30437 command_runner.go:130] >  Birth: -
	I1101 00:18:58.290565   30437 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1101 00:18:58.290576   30437 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1101 00:18:58.373965   30437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 00:18:59.670348   30437 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1101 00:18:59.670373   30437 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1101 00:18:59.670385   30437 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1101 00:18:59.670392   30437 command_runner.go:130] > daemonset.apps/kindnet configured
	I1101 00:18:59.670446   30437 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.296435069s)
	I1101 00:18:59.670470   30437 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 00:18:59.670550   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods
	I1101 00:18:59.670560   30437 round_trippers.go:469] Request Headers:
	I1101 00:18:59.670570   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:18:59.670581   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:18:59.675012   30437 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1101 00:18:59.675043   30437 round_trippers.go:577] Response Headers:
	I1101 00:18:59.675051   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:18:59.675056   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:18:59.675061   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:18:59.675066   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:18:59 GMT
	I1101 00:18:59.675084   30437 round_trippers.go:580]     Audit-Id: 5ee42c9a-687a-4b92-a566-89374d44517c
	I1101 00:18:59.675089   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:18:59.678155   30437 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"812"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rpvvn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d8ab0ebb-aa1f-4143-b987-6c1ae065954a","resourceVersion":"746","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15779dee-f1e7-4836-aba2-2d57728c2309","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15779dee-f1e7-4836-aba2-2d57728c2309\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82627 chars]
	I1101 00:18:59.682281   30437 system_pods.go:59] 12 kube-system pods found
	I1101 00:18:59.682324   30437 system_pods.go:61] "coredns-5dd5756b68-rpvvn" [d8ab0ebb-aa1f-4143-b987-6c1ae065954a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 00:18:59.682333   30437 system_pods.go:61] "etcd-multinode-600483" [c612ebac-fa1d-474a-b8cd-5e922a5f76dd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 00:18:59.682338   30437 system_pods.go:61] "kindnet-d4f6q" [d5c9428a-a6ef-44a8-b3c8-f65e25e9d4a9] Running
	I1101 00:18:59.682344   30437 system_pods.go:61] "kindnet-l75r4" [abfa8ec3-0565-4927-a07c-9fed1240d270] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 00:18:59.682350   30437 system_pods.go:61] "kindnet-ldrkn" [3d2ad5a0-69f9-4bd2-8bd8-503b7f7602a9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 00:18:59.682356   30437 system_pods.go:61] "kube-apiserver-multinode-600483" [bd94a63a-62c2-4654-aaf0-2e9df086b168] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 00:18:59.682366   30437 system_pods.go:61] "kube-controller-manager-multinode-600483" [9dd41877-c6ea-4591-90e1-632a234ffcf6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 00:18:59.682375   30437 system_pods.go:61] "kube-proxy-7kvtf" [e2101b7f-e517-4100-905d-f46517e68255] Running
	I1101 00:18:59.682380   30437 system_pods.go:61] "kube-proxy-84g2n" [a98efae3-9303-43be-a139-d21a5630c6b8] Running
	I1101 00:18:59.682387   30437 system_pods.go:61] "kube-proxy-tq28b" [9534d8b8-4536-4a0a-8af5-440e6871a85f] Running
	I1101 00:18:59.682393   30437 system_pods.go:61] "kube-scheduler-multinode-600483" [9cdd0be5-035a-49f5-8796-831ebde28bf0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 00:18:59.682397   30437 system_pods.go:61] "storage-provisioner" [a67f136b-7645-4eb9-9568-52e3ab06d66e] Running
	I1101 00:18:59.682403   30437 system_pods.go:74] duration metric: took 11.925319ms to wait for pod list to return data ...
	I1101 00:18:59.682412   30437 node_conditions.go:102] verifying NodePressure condition ...
	I1101 00:18:59.682467   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes
	I1101 00:18:59.682474   30437 round_trippers.go:469] Request Headers:
	I1101 00:18:59.682481   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:18:59.682487   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:18:59.688603   30437 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1101 00:18:59.688633   30437 round_trippers.go:577] Response Headers:
	I1101 00:18:59.688641   30437 round_trippers.go:580]     Audit-Id: c595e2ef-a642-4f6b-b1ff-9a4517b7b1a9
	I1101 00:18:59.688647   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:18:59.688652   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:18:59.688658   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:18:59.688663   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:18:59.688668   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:18:59 GMT
	I1101 00:18:59.688891   30437 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"812"},"items":[{"metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"709","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"manage
dFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1"," [truncated 15257 chars]
	I1101 00:18:59.690044   30437 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 00:18:59.690074   30437 node_conditions.go:123] node cpu capacity is 2
	I1101 00:18:59.690086   30437 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 00:18:59.690093   30437 node_conditions.go:123] node cpu capacity is 2
	I1101 00:18:59.690100   30437 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 00:18:59.690107   30437 node_conditions.go:123] node cpu capacity is 2
	I1101 00:18:59.690112   30437 node_conditions.go:105] duration metric: took 7.695356ms to run NodePressure ...
	I1101 00:18:59.690138   30437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:18:59.924274   30437 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1101 00:18:59.924297   30437 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1101 00:18:59.924318   30437 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1101 00:18:59.924457   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I1101 00:18:59.924489   30437 round_trippers.go:469] Request Headers:
	I1101 00:18:59.924500   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:18:59.924512   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:18:59.927779   30437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:18:59.927800   30437 round_trippers.go:577] Response Headers:
	I1101 00:18:59.927810   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:18:59.927818   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:18:59.927825   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:18:59 GMT
	I1101 00:18:59.927832   30437 round_trippers.go:580]     Audit-Id: 4624eb90-0dea-432f-b347-5c687856a5e6
	I1101 00:18:59.927839   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:18:59.927852   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:18:59.928362   30437 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"814"},"items":[{"metadata":{"name":"etcd-multinode-600483","namespace":"kube-system","uid":"c612ebac-fa1d-474a-b8cd-5e922a5f76dd","resourceVersion":"749","creationTimestamp":"2023-11-01T00:08:30Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.130:2379","kubernetes.io/config.hash":"5629fb0a0414e85632f97c416152ffbb","kubernetes.io/config.mirror":"5629fb0a0414e85632f97c416152ffbb","kubernetes.io/config.seen":"2023-11-01T00:08:30.293496672Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 28886 chars]
	I1101 00:18:59.929282   30437 kubeadm.go:787] kubelet initialised
	I1101 00:18:59.929301   30437 kubeadm.go:788] duration metric: took 4.972824ms waiting for restarted kubelet to initialise ...
	I1101 00:18:59.929310   30437 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 00:18:59.929372   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods
	I1101 00:18:59.929382   30437 round_trippers.go:469] Request Headers:
	I1101 00:18:59.929393   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:18:59.929404   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:18:59.932914   30437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:18:59.932931   30437 round_trippers.go:577] Response Headers:
	I1101 00:18:59.932938   30437 round_trippers.go:580]     Audit-Id: 730186af-30df-4695-89ad-cafe3c159f6b
	I1101 00:18:59.932943   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:18:59.932948   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:18:59.932956   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:18:59.932961   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:18:59.932966   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:18:59 GMT
	I1101 00:18:59.934558   30437 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"814"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rpvvn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d8ab0ebb-aa1f-4143-b987-6c1ae065954a","resourceVersion":"746","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15779dee-f1e7-4836-aba2-2d57728c2309","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15779dee-f1e7-4836-aba2-2d57728c2309\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82627 chars]
	I1101 00:18:59.937184   30437 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-rpvvn" in "kube-system" namespace to be "Ready" ...
	I1101 00:18:59.937262   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rpvvn
	I1101 00:18:59.937272   30437 round_trippers.go:469] Request Headers:
	I1101 00:18:59.937279   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:18:59.937285   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:18:59.939875   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:18:59.939890   30437 round_trippers.go:577] Response Headers:
	I1101 00:18:59.939896   30437 round_trippers.go:580]     Audit-Id: 73aaa09b-4d82-4441-a42a-7e4820a4e564
	I1101 00:18:59.939901   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:18:59.939952   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:18:59.939968   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:18:59.939975   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:18:59.939980   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:18:59 GMT
	I1101 00:18:59.940105   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rpvvn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d8ab0ebb-aa1f-4143-b987-6c1ae065954a","resourceVersion":"746","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15779dee-f1e7-4836-aba2-2d57728c2309","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15779dee-f1e7-4836-aba2-2d57728c2309\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1101 00:18:59.940493   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:18:59.940505   30437 round_trippers.go:469] Request Headers:
	I1101 00:18:59.940512   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:18:59.940517   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:18:59.942408   30437 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1101 00:18:59.942421   30437 round_trippers.go:577] Response Headers:
	I1101 00:18:59.942427   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:18:59.942435   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:18:59.942441   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:18:59.942447   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:18:59 GMT
	I1101 00:18:59.942455   30437 round_trippers.go:580]     Audit-Id: 1f18b0d9-46d3-43c1-8e54-3fba23b80874
	I1101 00:18:59.942471   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:18:59.942659   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"709","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 6125 chars]
	I1101 00:18:59.943009   30437 pod_ready.go:97] node "multinode-600483" hosting pod "coredns-5dd5756b68-rpvvn" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-600483" has status "Ready":"False"
	I1101 00:18:59.943032   30437 pod_ready.go:81] duration metric: took 5.827074ms waiting for pod "coredns-5dd5756b68-rpvvn" in "kube-system" namespace to be "Ready" ...
	E1101 00:18:59.943040   30437 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-600483" hosting pod "coredns-5dd5756b68-rpvvn" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-600483" has status "Ready":"False"
	I1101 00:18:59.943047   30437 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:18:59.943096   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-600483
	I1101 00:18:59.943104   30437 round_trippers.go:469] Request Headers:
	I1101 00:18:59.943116   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:18:59.943122   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:18:59.945173   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:18:59.945188   30437 round_trippers.go:577] Response Headers:
	I1101 00:18:59.945194   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:18:59.945199   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:18:59.945215   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:18:59 GMT
	I1101 00:18:59.945223   30437 round_trippers.go:580]     Audit-Id: 23a035b4-e82c-4129-825d-3611f37ab340
	I1101 00:18:59.945231   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:18:59.945238   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:18:59.945380   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-600483","namespace":"kube-system","uid":"c612ebac-fa1d-474a-b8cd-5e922a5f76dd","resourceVersion":"749","creationTimestamp":"2023-11-01T00:08:30Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.130:2379","kubernetes.io/config.hash":"5629fb0a0414e85632f97c416152ffbb","kubernetes.io/config.mirror":"5629fb0a0414e85632f97c416152ffbb","kubernetes.io/config.seen":"2023-11-01T00:08:30.293496672Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1101 00:18:59.945787   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:18:59.945800   30437 round_trippers.go:469] Request Headers:
	I1101 00:18:59.945807   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:18:59.945814   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:18:59.947752   30437 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1101 00:18:59.947764   30437 round_trippers.go:577] Response Headers:
	I1101 00:18:59.947770   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:18:59.947775   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:18:59.947781   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:18:59.947789   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:18:59.947798   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:18:59 GMT
	I1101 00:18:59.947807   30437 round_trippers.go:580]     Audit-Id: 312a7ac8-f3e6-4dd6-af7e-8df67fab135d
	I1101 00:18:59.948030   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"709","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 6125 chars]
	I1101 00:18:59.948374   30437 pod_ready.go:97] node "multinode-600483" hosting pod "etcd-multinode-600483" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-600483" has status "Ready":"False"
	I1101 00:18:59.948391   30437 pod_ready.go:81] duration metric: took 5.339261ms waiting for pod "etcd-multinode-600483" in "kube-system" namespace to be "Ready" ...
	E1101 00:18:59.948399   30437 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-600483" hosting pod "etcd-multinode-600483" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-600483" has status "Ready":"False"
	I1101 00:18:59.948412   30437 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:18:59.948458   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-600483
	I1101 00:18:59.948466   30437 round_trippers.go:469] Request Headers:
	I1101 00:18:59.948473   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:18:59.948479   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:18:59.955028   30437 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1101 00:18:59.955052   30437 round_trippers.go:577] Response Headers:
	I1101 00:18:59.955061   30437 round_trippers.go:580]     Audit-Id: 345ead19-0edc-43bf-b786-6c05b61a3117
	I1101 00:18:59.955067   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:18:59.955073   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:18:59.955078   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:18:59.955083   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:18:59.955088   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:18:59 GMT
	I1101 00:18:59.955330   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-600483","namespace":"kube-system","uid":"bd94a63a-62c2-4654-aaf0-2e9df086b168","resourceVersion":"750","creationTimestamp":"2023-11-01T00:08:30Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.130:8443","kubernetes.io/config.hash":"99a9cda13526c350638742a7c7b2ba52","kubernetes.io/config.mirror":"99a9cda13526c350638742a7c7b2ba52","kubernetes.io/config.seen":"2023-11-01T00:08:30.293497612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I1101 00:18:59.955801   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:18:59.955816   30437 round_trippers.go:469] Request Headers:
	I1101 00:18:59.955823   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:18:59.955829   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:18:59.958099   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:18:59.958121   30437 round_trippers.go:577] Response Headers:
	I1101 00:18:59.958131   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:18:59.958139   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:18:59.958148   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:18:59.958159   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:18:59.958167   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:18:59 GMT
	I1101 00:18:59.958179   30437 round_trippers.go:580]     Audit-Id: aac5d8cc-48f1-48c1-bc0a-2f8320c709fc
	I1101 00:18:59.958311   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"709","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 6125 chars]
	I1101 00:18:59.958690   30437 pod_ready.go:97] node "multinode-600483" hosting pod "kube-apiserver-multinode-600483" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-600483" has status "Ready":"False"
	I1101 00:18:59.958710   30437 pod_ready.go:81] duration metric: took 10.290192ms waiting for pod "kube-apiserver-multinode-600483" in "kube-system" namespace to be "Ready" ...
	E1101 00:18:59.958719   30437 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-600483" hosting pod "kube-apiserver-multinode-600483" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-600483" has status "Ready":"False"
	I1101 00:18:59.958734   30437 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:18:59.958779   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-600483
	I1101 00:18:59.958787   30437 round_trippers.go:469] Request Headers:
	I1101 00:18:59.958793   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:18:59.958799   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:18:59.963259   30437 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1101 00:18:59.963282   30437 round_trippers.go:577] Response Headers:
	I1101 00:18:59.963292   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:18:59.963300   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:18:59 GMT
	I1101 00:18:59.963308   30437 round_trippers.go:580]     Audit-Id: 8cfec538-94f6-4008-8b55-57d01f1902a3
	I1101 00:18:59.963319   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:18:59.963331   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:18:59.963339   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:18:59.963505   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-600483","namespace":"kube-system","uid":"9dd41877-c6ea-4591-90e1-632a234ffcf6","resourceVersion":"751","creationTimestamp":"2023-11-01T00:08:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f2b1fcba8b34b1f65e600fae0bd4374a","kubernetes.io/config.mirror":"f2b1fcba8b34b1f65e600fae0bd4374a","kubernetes.io/config.seen":"2023-11-01T00:08:20.448799328Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I1101 00:19:00.071263   30437 request.go:629] Waited for 107.275858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:00.071318   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:00.071327   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:00.071335   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:00.071341   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:00.074202   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:19:00.074222   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:00.074231   30437 round_trippers.go:580]     Audit-Id: 78255c31-f7f9-4cc8-b9b8-8cae7e2db89b
	I1101 00:19:00.074239   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:00.074247   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:00.074254   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:00.074262   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:00.074269   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:00 GMT
	I1101 00:19:00.074401   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"709","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 6125 chars]
	I1101 00:19:00.074718   30437 pod_ready.go:97] node "multinode-600483" hosting pod "kube-controller-manager-multinode-600483" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-600483" has status "Ready":"False"
	I1101 00:19:00.074737   30437 pod_ready.go:81] duration metric: took 115.996448ms waiting for pod "kube-controller-manager-multinode-600483" in "kube-system" namespace to be "Ready" ...
	E1101 00:19:00.074745   30437 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-600483" hosting pod "kube-controller-manager-multinode-600483" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-600483" has status "Ready":"False"
	I1101 00:19:00.074752   30437 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7kvtf" in "kube-system" namespace to be "Ready" ...
	I1101 00:19:00.271194   30437 request.go:629] Waited for 196.384776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7kvtf
	I1101 00:19:00.271271   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7kvtf
	I1101 00:19:00.271278   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:00.271288   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:00.271305   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:00.275054   30437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:19:00.275085   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:00.275093   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:00.275099   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:00.275105   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:00.275116   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:00.275121   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:00 GMT
	I1101 00:19:00.275126   30437 round_trippers.go:580]     Audit-Id: d6e59e8c-5780-4135-92ed-31ba0bc9d791
	I1101 00:19:00.275367   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7kvtf","generateName":"kube-proxy-","namespace":"kube-system","uid":"e2101b7f-e517-4100-905d-f46517e68255","resourceVersion":"469","creationTimestamp":"2023-11-01T00:09:23Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2d674cb3-a003-4ca9-a8b5-a283ae64b7c6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d674cb3-a003-4ca9-a8b5-a283ae64b7c6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5525 chars]
	I1101 00:19:00.471227   30437 request.go:629] Waited for 195.429421ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m02
	I1101 00:19:00.471325   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m02
	I1101 00:19:00.471332   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:00.471343   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:00.471358   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:00.474177   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:19:00.474198   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:00.474209   30437 round_trippers.go:580]     Audit-Id: e899210e-29a6-407b-9078-853757113b65
	I1101 00:19:00.474217   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:00.474225   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:00.474234   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:00.474242   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:00.474251   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:00 GMT
	I1101 00:19:00.474375   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483-m02","uid":"5b2b1f13-2a35-43d5-86a5-bb5c1d6395e1","resourceVersion":"700","creationTimestamp":"2023-11-01T00:09:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 3684 chars]
	I1101 00:19:00.474657   30437 pod_ready.go:92] pod "kube-proxy-7kvtf" in "kube-system" namespace has status "Ready":"True"
	I1101 00:19:00.474677   30437 pod_ready.go:81] duration metric: took 399.917463ms waiting for pod "kube-proxy-7kvtf" in "kube-system" namespace to be "Ready" ...
	I1101 00:19:00.474691   30437 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-84g2n" in "kube-system" namespace to be "Ready" ...
	I1101 00:19:00.671149   30437 request.go:629] Waited for 196.394664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-proxy-84g2n
	I1101 00:19:00.671204   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-proxy-84g2n
	I1101 00:19:00.671210   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:00.671221   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:00.671227   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:00.674197   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:19:00.674230   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:00.674237   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:00.674243   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:00 GMT
	I1101 00:19:00.674248   30437 round_trippers.go:580]     Audit-Id: 2639f9a6-5dfa-4cef-a7ff-b2ee26cf450e
	I1101 00:19:00.674254   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:00.674259   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:00.674267   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:00.674589   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-84g2n","generateName":"kube-proxy-","namespace":"kube-system","uid":"a98efae3-9303-43be-a139-d21a5630c6b8","resourceVersion":"680","creationTimestamp":"2023-11-01T00:10:15Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2d674cb3-a003-4ca9-a8b5-a283ae64b7c6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:10:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d674cb3-a003-4ca9-a8b5-a283ae64b7c6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1101 00:19:00.871468   30437 request.go:629] Waited for 196.425367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m03
	I1101 00:19:00.871551   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m03
	I1101 00:19:00.871561   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:00.871571   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:00.871579   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:00.874182   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:19:00.874215   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:00.874225   30437 round_trippers.go:580]     Audit-Id: 1c717225-314e-44aa-8673-94a8a97c347e
	I1101 00:19:00.874233   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:00.874245   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:00.874253   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:00.874261   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:00.874273   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:00 GMT
	I1101 00:19:00.874398   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483-m03","uid":"5050dc91-014d-4a1c-b839-f60403866911","resourceVersion":"707","creationTimestamp":"2023-11-01T00:10:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3411 chars]
	I1101 00:19:00.874704   30437 pod_ready.go:92] pod "kube-proxy-84g2n" in "kube-system" namespace has status "Ready":"True"
	I1101 00:19:00.874726   30437 pod_ready.go:81] duration metric: took 400.023021ms waiting for pod "kube-proxy-84g2n" in "kube-system" namespace to be "Ready" ...
	I1101 00:19:00.874738   30437 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tq28b" in "kube-system" namespace to be "Ready" ...
	I1101 00:19:01.071254   30437 request.go:629] Waited for 196.425442ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tq28b
	I1101 00:19:01.071337   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tq28b
	I1101 00:19:01.071343   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:01.071353   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:01.071363   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:01.074328   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:19:01.074354   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:01.074361   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:01.074367   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:01.074375   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:01 GMT
	I1101 00:19:01.074387   30437 round_trippers.go:580]     Audit-Id: e4e2460a-a86c-4d52-b71d-059ffde3ce1e
	I1101 00:19:01.074399   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:01.074422   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:01.074600   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tq28b","generateName":"kube-proxy-","namespace":"kube-system","uid":"9534d8b8-4536-4a0a-8af5-440e6871a85f","resourceVersion":"793","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2d674cb3-a003-4ca9-a8b5-a283ae64b7c6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d674cb3-a003-4ca9-a8b5-a283ae64b7c6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1101 00:19:01.271556   30437 request.go:629] Waited for 196.436307ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:01.271629   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:01.271634   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:01.271642   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:01.271647   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:01.274150   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:19:01.274171   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:01.274178   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:01.274191   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:01.274198   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:01.274206   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:01.274214   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:01 GMT
	I1101 00:19:01.274221   30437 round_trippers.go:580]     Audit-Id: 539047d2-bbc6-4eca-aa72-3252d47ad669
	I1101 00:19:01.274421   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"709","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 6125 chars]
	I1101 00:19:01.274817   30437 pod_ready.go:97] node "multinode-600483" hosting pod "kube-proxy-tq28b" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-600483" has status "Ready":"False"
	I1101 00:19:01.274836   30437 pod_ready.go:81] duration metric: took 400.090324ms waiting for pod "kube-proxy-tq28b" in "kube-system" namespace to be "Ready" ...
	E1101 00:19:01.274844   30437 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-600483" hosting pod "kube-proxy-tq28b" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-600483" has status "Ready":"False"
	I1101 00:19:01.274850   30437 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:19:01.471367   30437 request.go:629] Waited for 196.420109ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-600483
	I1101 00:19:01.471472   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-600483
	I1101 00:19:01.471480   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:01.471489   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:01.471496   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:01.474266   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:19:01.474289   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:01.474297   30437 round_trippers.go:580]     Audit-Id: d8adbe40-df56-49e8-86d4-e53bb9264560
	I1101 00:19:01.474305   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:01.474310   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:01.474315   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:01.474320   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:01.474325   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:01 GMT
	I1101 00:19:01.474455   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-600483","namespace":"kube-system","uid":"9cdd0be5-035a-49f5-8796-831ebde28bf0","resourceVersion":"745","creationTimestamp":"2023-11-01T00:08:30Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"01c4e8f68a00a3553dcff3388cb56149","kubernetes.io/config.mirror":"01c4e8f68a00a3553dcff3388cb56149","kubernetes.io/config.seen":"2023-11-01T00:08:30.293495470Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4928 chars]
	I1101 00:19:01.671210   30437 request.go:629] Waited for 196.377192ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:01.671286   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:01.671292   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:01.671299   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:01.671305   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:01.674078   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:19:01.674103   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:01.674113   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:01.674120   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:01.674127   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:01 GMT
	I1101 00:19:01.674135   30437 round_trippers.go:580]     Audit-Id: 6c0e469a-f601-423c-ab5f-8f65fe97dcc4
	I1101 00:19:01.674143   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:01.674151   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:01.674388   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"709","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 6125 chars]
	I1101 00:19:01.674725   30437 pod_ready.go:97] node "multinode-600483" hosting pod "kube-scheduler-multinode-600483" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-600483" has status "Ready":"False"
	I1101 00:19:01.674748   30437 pod_ready.go:81] duration metric: took 399.890107ms waiting for pod "kube-scheduler-multinode-600483" in "kube-system" namespace to be "Ready" ...
	E1101 00:19:01.674761   30437 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-600483" hosting pod "kube-scheduler-multinode-600483" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-600483" has status "Ready":"False"
	I1101 00:19:01.674771   30437 pod_ready.go:38] duration metric: took 1.745452517s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 00:19:01.674793   30437 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 00:19:01.686652   30437 command_runner.go:130] > -16
	I1101 00:19:01.686681   30437 ops.go:34] apiserver oom_adj: -16
	I1101 00:19:01.686687   30437 kubeadm.go:640] restartCluster took 22.946968778s
	I1101 00:19:01.686695   30437 kubeadm.go:406] StartCluster complete in 22.992435846s
	I1101 00:19:01.686711   30437 settings.go:142] acquiring lock: {Name:mk7f269e64dfd8d176737f993e01f6e6badafbc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:19:01.686811   30437 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 00:19:01.687442   30437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/kubeconfig: {Name:mk08da65b6c71084e1cfafb19800038e8c8303e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:19:01.687699   30437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 00:19:01.687854   30437 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1101 00:19:01.687986   30437 config.go:182] Loaded profile config "multinode-600483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:19:01.688043   30437 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 00:19:01.690725   30437 out.go:177] * Enabled addons: 
	I1101 00:19:01.688392   30437 kapi.go:59] client config for multinode-600483: &rest.Config{Host:"https://192.168.39.130:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.key", CAFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 00:19:01.692319   30437 addons.go:502] enable addons completed in 4.4676ms: enabled=[]
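The kapi.go line above dumps the rest.Config that minikube builds from this profile's client certificate, key, and CA. As a reference sketch only (not minikube's actual code), a minimal client-go program that builds an equivalent client from the same kubeconfig would look like the following; the kubeconfig path is the one logged in this run:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig written by minikube for this run (path taken from the log above).
        kubeconfig := "/home/jenkins/minikube-integration/17486-7305/kubeconfig"

        // Build a *rest.Config comparable to the one dumped by kapi.go:59.
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            panic(err)
        }

        // Typed clientset behind API calls like the GETs logged by round_trippers.
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println("API server:", cfg.Host, "clientset ready:", client != nil)
    }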
	I1101 00:19:01.692568   30437 round_trippers.go:463] GET https://192.168.39.130:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1101 00:19:01.692580   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:01.692588   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:01.692594   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:01.695319   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:19:01.695335   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:01.695342   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:01.695347   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:01.695352   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:01.695368   30437 round_trippers.go:580]     Content-Length: 291
	I1101 00:19:01.695374   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:01 GMT
	I1101 00:19:01.695379   30437 round_trippers.go:580]     Audit-Id: 017f8f36-3159-4c79-a02e-3f5e19b3b4a2
	I1101 00:19:01.695383   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:01.695406   30437 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"21004493-8bb6-43e9-8ba2-65d98d570b24","resourceVersion":"813","creationTimestamp":"2023-11-01T00:08:30Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1101 00:19:01.695556   30437 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-600483" context rescaled to 1 replicas
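The rescale just logged goes through the deployments/coredns/scale subresource (the GET shown directly above it). A minimal client-go sketch of rescaling through that subresource, assuming a clientset built as in the previous example; this illustrates the pattern and is not minikube's exact implementation:

    package example // hypothetical helper package for these sketches, not part of minikube

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // rescaleCoreDNS reads the current Scale of kube-system/coredns and, if needed,
    // writes it back with spec.replicas set to 1 via the scale subresource.
    func rescaleCoreDNS(ctx context.Context, client kubernetes.Interface) error {
        scale, err := client.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        if scale.Spec.Replicas == 1 {
            return nil // already at the desired count, as in this run
        }
        scale.Spec.Replicas = 1
        _, err = client.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        return err
    }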
	I1101 00:19:01.695582   30437 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.130 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 00:19:01.698072   30437 out.go:177] * Verifying Kubernetes components...
	I1101 00:19:01.699412   30437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 00:19:01.798196   30437 command_runner.go:130] > apiVersion: v1
	I1101 00:19:01.798213   30437 command_runner.go:130] > data:
	I1101 00:19:01.798218   30437 command_runner.go:130] >   Corefile: |
	I1101 00:19:01.798222   30437 command_runner.go:130] >     .:53 {
	I1101 00:19:01.798226   30437 command_runner.go:130] >         log
	I1101 00:19:01.798230   30437 command_runner.go:130] >         errors
	I1101 00:19:01.798235   30437 command_runner.go:130] >         health {
	I1101 00:19:01.798239   30437 command_runner.go:130] >            lameduck 5s
	I1101 00:19:01.798244   30437 command_runner.go:130] >         }
	I1101 00:19:01.798249   30437 command_runner.go:130] >         ready
	I1101 00:19:01.798255   30437 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1101 00:19:01.798259   30437 command_runner.go:130] >            pods insecure
	I1101 00:19:01.798265   30437 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1101 00:19:01.798272   30437 command_runner.go:130] >            ttl 30
	I1101 00:19:01.798276   30437 command_runner.go:130] >         }
	I1101 00:19:01.798282   30437 command_runner.go:130] >         prometheus :9153
	I1101 00:19:01.798286   30437 command_runner.go:130] >         hosts {
	I1101 00:19:01.798292   30437 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I1101 00:19:01.798300   30437 command_runner.go:130] >            fallthrough
	I1101 00:19:01.798306   30437 command_runner.go:130] >         }
	I1101 00:19:01.798316   30437 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1101 00:19:01.798323   30437 command_runner.go:130] >            max_concurrent 1000
	I1101 00:19:01.798331   30437 command_runner.go:130] >         }
	I1101 00:19:01.798338   30437 command_runner.go:130] >         cache 30
	I1101 00:19:01.798348   30437 command_runner.go:130] >         loop
	I1101 00:19:01.798364   30437 command_runner.go:130] >         reload
	I1101 00:19:01.798371   30437 command_runner.go:130] >         loadbalance
	I1101 00:19:01.798380   30437 command_runner.go:130] >     }
	I1101 00:19:01.798387   30437 command_runner.go:130] > kind: ConfigMap
	I1101 00:19:01.798396   30437 command_runner.go:130] > metadata:
	I1101 00:19:01.798403   30437 command_runner.go:130] >   creationTimestamp: "2023-11-01T00:08:30Z"
	I1101 00:19:01.798412   30437 command_runner.go:130] >   name: coredns
	I1101 00:19:01.798416   30437 command_runner.go:130] >   namespace: kube-system
	I1101 00:19:01.798423   30437 command_runner.go:130] >   resourceVersion: "359"
	I1101 00:19:01.798430   30437 command_runner.go:130] >   uid: 31ab598b-b8d9-4371-84e5-236ff729854b
	I1101 00:19:01.798492   30437 node_ready.go:35] waiting up to 6m0s for node "multinode-600483" to be "Ready" ...
	I1101 00:19:01.798501   30437 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
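For readability, the Corefile carried in the coredns ConfigMap dump above (the command_runner lines), with the log prefixes stripped, is:

    .:53 {
        log
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }

The hosts block is what the start.go:899 message refers to: the host.minikube.internal record is already present, so the ConfigMap is left untouched.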
	I1101 00:19:01.870842   30437 request.go:629] Waited for 72.243507ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:01.870896   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:01.870901   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:01.870913   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:01.870920   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:01.873847   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:19:01.873868   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:01.873875   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:01.873881   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:01.873887   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:01.873892   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:01.873898   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:01 GMT
	I1101 00:19:01.873904   30437 round_trippers.go:580]     Audit-Id: 39f86be2-4d8b-4b64-a24d-0b1a67f1d72e
	I1101 00:19:01.874303   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"709","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 6125 chars]
	I1101 00:19:02.071075   30437 request.go:629] Waited for 196.388291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:02.071141   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:02.071149   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:02.071161   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:02.071189   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:02.074013   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:19:02.074034   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:02.074044   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:02 GMT
	I1101 00:19:02.074052   30437 round_trippers.go:580]     Audit-Id: 2be75986-2e2a-497f-b885-b0a2368dc229
	I1101 00:19:02.074060   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:02.074066   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:02.074073   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:02.074081   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:02.074392   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"709","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 6125 chars]
	I1101 00:19:02.575494   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:02.575519   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:02.575527   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:02.575533   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:02.578415   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:19:02.578438   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:02.578445   30437 round_trippers.go:580]     Audit-Id: fbba6bee-0fef-4f66-97c4-0ca3b912f278
	I1101 00:19:02.578451   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:02.578462   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:02.578467   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:02.578472   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:02.578477   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:02 GMT
	I1101 00:19:02.578699   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"709","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 6125 chars]
	I1101 00:19:03.075922   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:03.075981   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:03.075994   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:03.076005   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:03.079417   30437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:19:03.079452   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:03.079459   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:03.079464   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:03.079478   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:03.079484   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:03 GMT
	I1101 00:19:03.079489   30437 round_trippers.go:580]     Audit-Id: 74b81228-0fc0-40fe-b124-6dae233dc84c
	I1101 00:19:03.079494   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:03.079692   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"824","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1101 00:19:03.080134   30437 node_ready.go:49] node "multinode-600483" has status "Ready":"True"
	I1101 00:19:03.080156   30437 node_ready.go:38] duration metric: took 1.281637572s waiting for node "multinode-600483" to be "Ready" ...
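The readiness loop above repeatedly GETs /api/v1/nodes/multinode-600483 (roughly every 500ms, per the timestamps) until the node's Ready condition turns True. A client-go sketch of that pattern, assuming a recent apimachinery and the clientset from the first example; it is an illustration of the polling, not node_ready.go itself. The pod polls that follow apply the same pattern to pod.Status.Conditions.

    package example // hypothetical, continues the helper package from the earlier sketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls a node until its Ready condition is True or the timeout expires.
    func waitNodeReady(ctx context.Context, client kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // tolerate transient API errors and keep polling
                }
                for _, cond := range node.Status.Conditions {
                    if cond.Type == corev1.NodeReady {
                        return cond.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }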
	I1101 00:19:03.080169   30437 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 00:19:03.080285   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods
	I1101 00:19:03.080301   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:03.080313   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:03.080328   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:03.084335   30437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:19:03.084353   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:03.084360   30437 round_trippers.go:580]     Audit-Id: 556e7578-6ea4-4bbd-9225-d64f44db6a23
	I1101 00:19:03.084365   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:03.084370   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:03.084375   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:03.084392   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:03.084397   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:03 GMT
	I1101 00:19:03.086054   30437 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"824"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rpvvn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d8ab0ebb-aa1f-4143-b987-6c1ae065954a","resourceVersion":"746","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15779dee-f1e7-4836-aba2-2d57728c2309","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15779dee-f1e7-4836-aba2-2d57728c2309\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82946 chars]
	I1101 00:19:03.088496   30437 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rpvvn" in "kube-system" namespace to be "Ready" ...
	I1101 00:19:03.088575   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rpvvn
	I1101 00:19:03.088583   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:03.088590   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:03.088600   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:03.090939   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:19:03.090957   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:03.090977   30437 round_trippers.go:580]     Audit-Id: 11a5726e-2af7-42c3-8dde-f4f6f840a6db
	I1101 00:19:03.090988   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:03.090996   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:03.091004   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:03.091015   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:03.091023   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:03 GMT
	I1101 00:19:03.091375   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rpvvn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d8ab0ebb-aa1f-4143-b987-6c1ae065954a","resourceVersion":"746","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15779dee-f1e7-4836-aba2-2d57728c2309","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15779dee-f1e7-4836-aba2-2d57728c2309\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1101 00:19:03.091979   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:03.091997   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:03.092009   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:03.092017   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:03.094203   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:19:03.094218   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:03.094224   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:03.094231   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:03.094240   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:03.094249   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:03.094257   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:03 GMT
	I1101 00:19:03.094282   30437 round_trippers.go:580]     Audit-Id: 601a775b-fc3c-4944-99f7-29fa6558368c
	I1101 00:19:03.094514   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"824","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1101 00:19:03.271395   30437 request.go:629] Waited for 176.546368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rpvvn
	I1101 00:19:03.271478   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rpvvn
	I1101 00:19:03.271487   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:03.271495   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:03.271501   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:03.274203   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:19:03.274222   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:03.274231   30437 round_trippers.go:580]     Audit-Id: cb91f6dd-e153-47e4-b21d-9803ac380a9a
	I1101 00:19:03.274239   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:03.274248   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:03.274260   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:03.274273   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:03.274284   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:03 GMT
	I1101 00:19:03.274458   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rpvvn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d8ab0ebb-aa1f-4143-b987-6c1ae065954a","resourceVersion":"746","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15779dee-f1e7-4836-aba2-2d57728c2309","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15779dee-f1e7-4836-aba2-2d57728c2309\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1101 00:19:03.471320   30437 request.go:629] Waited for 196.392413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:03.471415   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:03.471426   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:03.471437   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:03.471446   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:03.474382   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:19:03.474404   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:03.474411   30437 round_trippers.go:580]     Audit-Id: 9ffc94d2-4e54-45d7-9cd5-e0d924ce799d
	I1101 00:19:03.474416   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:03.474429   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:03.474439   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:03.474448   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:03.474459   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:03 GMT
	I1101 00:19:03.474632   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"824","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1101 00:19:03.975879   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rpvvn
	I1101 00:19:03.975902   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:03.975913   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:03.975919   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:03.979898   30437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:19:03.979924   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:03.979952   30437 round_trippers.go:580]     Audit-Id: 91f68285-f086-463b-a01f-52673a5afa6e
	I1101 00:19:03.979962   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:03.979971   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:03.979977   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:03.979983   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:03.979992   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:03 GMT
	I1101 00:19:03.980471   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rpvvn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d8ab0ebb-aa1f-4143-b987-6c1ae065954a","resourceVersion":"746","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15779dee-f1e7-4836-aba2-2d57728c2309","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15779dee-f1e7-4836-aba2-2d57728c2309\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1101 00:19:03.981030   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:03.981048   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:03.981059   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:03.981069   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:03.984101   30437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:19:03.984125   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:03.984132   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:03.984138   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:03.984144   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:03.984149   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:03 GMT
	I1101 00:19:03.984155   30437 round_trippers.go:580]     Audit-Id: bb7b98a1-d269-40bd-a583-fd9d7fce9176
	I1101 00:19:03.984161   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:03.984613   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"824","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1101 00:19:04.475244   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rpvvn
	I1101 00:19:04.475281   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:04.475289   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:04.475295   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:04.478564   30437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:19:04.478589   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:04.478600   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:04.478608   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:04.478618   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:04 GMT
	I1101 00:19:04.478626   30437 round_trippers.go:580]     Audit-Id: 2828ed15-845b-4576-b1d7-985c939ec1ea
	I1101 00:19:04.478648   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:04.478665   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:04.479289   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rpvvn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d8ab0ebb-aa1f-4143-b987-6c1ae065954a","resourceVersion":"746","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15779dee-f1e7-4836-aba2-2d57728c2309","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15779dee-f1e7-4836-aba2-2d57728c2309\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1101 00:19:04.479755   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:04.479771   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:04.479778   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:04.479784   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:04.482054   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:19:04.482074   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:04.482081   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:04.482091   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:04.482097   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:04 GMT
	I1101 00:19:04.482103   30437 round_trippers.go:580]     Audit-Id: 7cb39d00-a437-419f-9b4f-aabb67c5a18b
	I1101 00:19:04.482108   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:04.482113   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:04.482293   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"824","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1101 00:19:04.976015   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rpvvn
	I1101 00:19:04.976039   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:04.976047   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:04.976052   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:04.978903   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:19:04.978930   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:04.978938   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:04.978946   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:04.978953   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:04.978960   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:04 GMT
	I1101 00:19:04.978968   30437 round_trippers.go:580]     Audit-Id: 17c1904d-9409-4909-bda0-7055d596a82e
	I1101 00:19:04.978976   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:04.979278   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rpvvn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d8ab0ebb-aa1f-4143-b987-6c1ae065954a","resourceVersion":"746","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15779dee-f1e7-4836-aba2-2d57728c2309","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15779dee-f1e7-4836-aba2-2d57728c2309\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1101 00:19:04.979717   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:04.979730   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:04.979737   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:04.979746   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:04.981989   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:19:04.982011   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:04.982021   30437 round_trippers.go:580]     Audit-Id: e0212936-350c-4855-9abd-dd18450b5b0c
	I1101 00:19:04.982030   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:04.982037   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:04.982045   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:04.982053   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:04.982062   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:04 GMT
	I1101 00:19:04.982408   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"824","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1101 00:19:05.476185   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rpvvn
	I1101 00:19:05.476209   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:05.476217   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:05.476223   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:05.478897   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:19:05.478928   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:05.478937   30437 round_trippers.go:580]     Audit-Id: 69d5d8c9-20ed-4d8f-bfc1-303ede0a6a58
	I1101 00:19:05.478945   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:05.478953   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:05.478960   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:05.478966   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:05.478974   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:05 GMT
	I1101 00:19:05.479142   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rpvvn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d8ab0ebb-aa1f-4143-b987-6c1ae065954a","resourceVersion":"746","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15779dee-f1e7-4836-aba2-2d57728c2309","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15779dee-f1e7-4836-aba2-2d57728c2309\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1101 00:19:05.479567   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:05.479581   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:05.479591   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:05.479599   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:05.482224   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:19:05.482248   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:05.482255   30437 round_trippers.go:580]     Audit-Id: 89565b00-e099-4bf3-b730-955a717f5e2b
	I1101 00:19:05.482261   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:05.482266   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:05.482271   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:05.482276   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:05.482281   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:05 GMT
	I1101 00:19:05.482644   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"824","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1101 00:19:05.482938   30437 pod_ready.go:102] pod "coredns-5dd5756b68-rpvvn" in "kube-system" namespace has status "Ready":"False"
	I1101 00:19:05.975278   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rpvvn
	I1101 00:19:05.975300   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:05.975308   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:05.975314   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:05.983234   30437 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1101 00:19:05.983265   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:05.983275   30437 round_trippers.go:580]     Audit-Id: f5710032-a519-4de4-beb2-284bf819fc9a
	I1101 00:19:05.983283   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:05.983291   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:05.983298   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:05.983306   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:05.983313   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:05 GMT
	I1101 00:19:05.983507   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rpvvn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d8ab0ebb-aa1f-4143-b987-6c1ae065954a","resourceVersion":"746","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15779dee-f1e7-4836-aba2-2d57728c2309","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15779dee-f1e7-4836-aba2-2d57728c2309\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1101 00:19:05.983990   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:05.984003   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:05.984011   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:05.984017   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:05.989172   30437 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1101 00:19:05.989193   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:05.989201   30437 round_trippers.go:580]     Audit-Id: b22a276f-c22b-4a93-8a39-cf173b455612
	I1101 00:19:05.989210   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:05.989215   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:05.989220   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:05.989225   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:05.989230   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:05 GMT
	I1101 00:19:05.989394   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"824","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1101 00:19:06.475470   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rpvvn
	I1101 00:19:06.475495   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:06.475503   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:06.475508   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:06.478737   30437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:19:06.478763   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:06.478770   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:06.478775   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:06.478780   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:06.478785   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:06.478790   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:06 GMT
	I1101 00:19:06.478795   30437 round_trippers.go:580]     Audit-Id: ba640be4-cdda-4baa-af35-f3e954491a80
	I1101 00:19:06.479199   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rpvvn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d8ab0ebb-aa1f-4143-b987-6c1ae065954a","resourceVersion":"833","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15779dee-f1e7-4836-aba2-2d57728c2309","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15779dee-f1e7-4836-aba2-2d57728c2309\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1101 00:19:06.479640   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:06.479651   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:06.479658   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:06.479664   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:06.482077   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:19:06.482097   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:06.482104   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:06.482109   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:06.482114   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:06 GMT
	I1101 00:19:06.482119   30437 round_trippers.go:580]     Audit-Id: 299d3963-08a4-42ad-bd7a-d5e80e2d583d
	I1101 00:19:06.482124   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:06.482129   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:06.482348   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"824","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1101 00:19:06.482657   30437 pod_ready.go:92] pod "coredns-5dd5756b68-rpvvn" in "kube-system" namespace has status "Ready":"True"
	I1101 00:19:06.482673   30437 pod_ready.go:81] duration metric: took 3.394145802s waiting for pod "coredns-5dd5756b68-rpvvn" in "kube-system" namespace to be "Ready" ...
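The coredns check above finishes once the pod reports its PodReady condition as True. As a rough illustration only (a minimal sketch, not minikube's actual pod_ready.go code), that "Ready":"True" result can be derived from the pod's status conditions like this:

```go
// Illustrative sketch: derive the "Ready" verdict logged above from a pod's
// status conditions. Package and function names are made up for the example.
package readiness

import (
	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}
```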
	I1101 00:19:06.482681   30437 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:19:06.482727   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-600483
	I1101 00:19:06.482738   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:06.482744   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:06.482751   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:06.485183   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:19:06.485200   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:06.485206   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:06.485211   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:06.485216   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:06.485221   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:06.485229   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:06 GMT
	I1101 00:19:06.485237   30437 round_trippers.go:580]     Audit-Id: 9ebb57cd-038b-45e8-96c3-89e6643ed56e
	I1101 00:19:06.485387   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-600483","namespace":"kube-system","uid":"c612ebac-fa1d-474a-b8cd-5e922a5f76dd","resourceVersion":"827","creationTimestamp":"2023-11-01T00:08:30Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.130:2379","kubernetes.io/config.hash":"5629fb0a0414e85632f97c416152ffbb","kubernetes.io/config.mirror":"5629fb0a0414e85632f97c416152ffbb","kubernetes.io/config.seen":"2023-11-01T00:08:30.293496672Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1101 00:19:06.485739   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:06.485754   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:06.485761   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:06.485767   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:06.487646   30437 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1101 00:19:06.487666   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:06.487676   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:06.487682   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:06.487687   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:06 GMT
	I1101 00:19:06.487693   30437 round_trippers.go:580]     Audit-Id: b3e5eb20-7257-4dd3-8f04-6cb3d12cb5e6
	I1101 00:19:06.487697   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:06.487704   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:06.487827   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"824","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1101 00:19:06.488160   30437 pod_ready.go:92] pod "etcd-multinode-600483" in "kube-system" namespace has status "Ready":"True"
	I1101 00:19:06.488175   30437 pod_ready.go:81] duration metric: took 5.48958ms waiting for pod "etcd-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:19:06.488190   30437 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:19:06.488243   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-600483
	I1101 00:19:06.488253   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:06.488259   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:06.488265   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:06.490388   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:19:06.490411   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:06.490418   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:06.490423   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:06.490429   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:06 GMT
	I1101 00:19:06.490435   30437 round_trippers.go:580]     Audit-Id: 6dcb9350-a6e4-47fb-bdf0-d0699315ee5e
	I1101 00:19:06.490440   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:06.490445   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:06.490615   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-600483","namespace":"kube-system","uid":"bd94a63a-62c2-4654-aaf0-2e9df086b168","resourceVersion":"750","creationTimestamp":"2023-11-01T00:08:30Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.130:8443","kubernetes.io/config.hash":"99a9cda13526c350638742a7c7b2ba52","kubernetes.io/config.mirror":"99a9cda13526c350638742a7c7b2ba52","kubernetes.io/config.seen":"2023-11-01T00:08:30.293497612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I1101 00:19:06.671388   30437 request.go:629] Waited for 180.376832ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:06.671485   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:06.671494   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:06.671504   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:06.671513   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:06.674617   30437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:19:06.674643   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:06.674650   30437 round_trippers.go:580]     Audit-Id: f8960c4a-2217-4ea8-a50a-2928a69c50e8
	I1101 00:19:06.674656   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:06.674661   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:06.674666   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:06.674671   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:06.674676   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:06 GMT
	I1101 00:19:06.674986   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"824","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1101 00:19:06.870699   30437 request.go:629] Waited for 195.349526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-600483
	I1101 00:19:06.870795   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-600483
	I1101 00:19:06.870828   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:06.870848   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:06.870863   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:06.874283   30437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:19:06.874306   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:06.874316   30437 round_trippers.go:580]     Audit-Id: a28944a0-89e9-4659-8e31-4c7cd9b94179
	I1101 00:19:06.874325   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:06.874331   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:06.874338   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:06.874346   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:06.874354   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:06 GMT
	I1101 00:19:06.874607   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-600483","namespace":"kube-system","uid":"bd94a63a-62c2-4654-aaf0-2e9df086b168","resourceVersion":"750","creationTimestamp":"2023-11-01T00:08:30Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.130:8443","kubernetes.io/config.hash":"99a9cda13526c350638742a7c7b2ba52","kubernetes.io/config.mirror":"99a9cda13526c350638742a7c7b2ba52","kubernetes.io/config.seen":"2023-11-01T00:08:30.293497612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I1101 00:19:07.071627   30437 request.go:629] Waited for 196.467339ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:07.071697   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:07.071704   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:07.071715   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:07.071730   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:07.077279   30437 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1101 00:19:07.077304   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:07.077311   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:07 GMT
	I1101 00:19:07.077317   30437 round_trippers.go:580]     Audit-Id: 4948f183-1ab0-4087-9b12-0253ce2f8e25
	I1101 00:19:07.077322   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:07.077326   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:07.077331   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:07.077336   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:07.077740   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"824","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1101 00:19:07.578914   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-600483
	I1101 00:19:07.578940   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:07.578948   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:07.578954   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:07.582388   30437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:19:07.582415   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:07.582425   30437 round_trippers.go:580]     Audit-Id: 1c4b7854-f03f-497e-9933-5776a97b3d7c
	I1101 00:19:07.582433   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:07.582441   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:07.582450   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:07.582476   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:07.582487   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:07 GMT
	I1101 00:19:07.582801   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-600483","namespace":"kube-system","uid":"bd94a63a-62c2-4654-aaf0-2e9df086b168","resourceVersion":"750","creationTimestamp":"2023-11-01T00:08:30Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.130:8443","kubernetes.io/config.hash":"99a9cda13526c350638742a7c7b2ba52","kubernetes.io/config.mirror":"99a9cda13526c350638742a7c7b2ba52","kubernetes.io/config.seen":"2023-11-01T00:08:30.293497612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I1101 00:19:07.583200   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:07.583212   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:07.583219   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:07.583225   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:07.585666   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:19:07.585691   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:07.585702   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:07 GMT
	I1101 00:19:07.585710   30437 round_trippers.go:580]     Audit-Id: 215dd1d5-eebd-4676-be9a-6ca9fdfcf2e7
	I1101 00:19:07.585719   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:07.585732   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:07.585746   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:07.585758   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:07.585913   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"824","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1101 00:19:08.078224   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-600483
	I1101 00:19:08.078240   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:08.078249   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:08.078257   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:08.081868   30437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:19:08.081892   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:08.081904   30437 round_trippers.go:580]     Audit-Id: 036e83ce-7619-4817-9d55-481ad079977c
	I1101 00:19:08.081913   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:08.081919   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:08.081924   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:08.081932   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:08.081937   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:08 GMT
	I1101 00:19:08.082453   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-600483","namespace":"kube-system","uid":"bd94a63a-62c2-4654-aaf0-2e9df086b168","resourceVersion":"750","creationTimestamp":"2023-11-01T00:08:30Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.130:8443","kubernetes.io/config.hash":"99a9cda13526c350638742a7c7b2ba52","kubernetes.io/config.mirror":"99a9cda13526c350638742a7c7b2ba52","kubernetes.io/config.seen":"2023-11-01T00:08:30.293497612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I1101 00:19:08.082952   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:08.082967   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:08.082975   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:08.082985   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:08.086286   30437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:19:08.086307   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:08.086316   30437 round_trippers.go:580]     Audit-Id: e10a0c26-0b47-4f9d-b53f-e19deb0fcb9f
	I1101 00:19:08.086324   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:08.086340   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:08.086347   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:08.086358   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:08.086366   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:08 GMT
	I1101 00:19:08.086529   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"824","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1101 00:19:08.578468   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-600483
	I1101 00:19:08.578495   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:08.578507   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:08.578517   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:08.581284   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:19:08.581302   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:08.581309   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:08 GMT
	I1101 00:19:08.581314   30437 round_trippers.go:580]     Audit-Id: d1f3747f-a9dd-491b-bba4-7b6f3e8977d1
	I1101 00:19:08.581320   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:08.581369   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:08.581378   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:08.581384   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:08.581599   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-600483","namespace":"kube-system","uid":"bd94a63a-62c2-4654-aaf0-2e9df086b168","resourceVersion":"750","creationTimestamp":"2023-11-01T00:08:30Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.130:8443","kubernetes.io/config.hash":"99a9cda13526c350638742a7c7b2ba52","kubernetes.io/config.mirror":"99a9cda13526c350638742a7c7b2ba52","kubernetes.io/config.seen":"2023-11-01T00:08:30.293497612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I1101 00:19:08.582133   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:08.582158   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:08.582170   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:08.582179   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:08.586776   30437 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1101 00:19:08.586795   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:08.586801   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:08.586807   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:08 GMT
	I1101 00:19:08.586850   30437 round_trippers.go:580]     Audit-Id: aa4dbafe-33e2-408a-8094-391d8db81292
	I1101 00:19:08.586860   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:08.586865   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:08.586871   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:08.587070   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"824","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1101 00:19:08.587495   30437 pod_ready.go:102] pod "kube-apiserver-multinode-600483" in "kube-system" namespace has status "Ready":"False"
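Here the apiserver pod still reports Ready as False, so the wait loop sleeps roughly half a second and re-fetches the pod until the condition flips (as it does a few lines below) or the 6m0s budget runs out. A minimal sketch of that pattern with client-go, assuming the helper name and exact interval are illustrative:

```go
// Sketch of a poll-until-ready loop like the one producing the log above.
package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady re-fetches the pod every 500ms for up to 6 minutes and
// returns nil once its PodReady condition is True.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient errors: keep polling
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}
```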
	I1101 00:19:09.078989   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-600483
	I1101 00:19:09.079012   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:09.079020   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:09.079025   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:09.081238   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:19:09.081256   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:09.081263   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:09.081272   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:09.081279   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:09.081287   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:09 GMT
	I1101 00:19:09.081296   30437 round_trippers.go:580]     Audit-Id: 92071a8d-5d74-4a21-bfdf-519e5c351a38
	I1101 00:19:09.081303   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:09.081422   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-600483","namespace":"kube-system","uid":"bd94a63a-62c2-4654-aaf0-2e9df086b168","resourceVersion":"843","creationTimestamp":"2023-11-01T00:08:30Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.130:8443","kubernetes.io/config.hash":"99a9cda13526c350638742a7c7b2ba52","kubernetes.io/config.mirror":"99a9cda13526c350638742a7c7b2ba52","kubernetes.io/config.seen":"2023-11-01T00:08:30.293497612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1101 00:19:09.081824   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:09.081837   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:09.081843   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:09.081849   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:09.086117   30437 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1101 00:19:09.086138   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:09.086145   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:09.086150   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:09 GMT
	I1101 00:19:09.086155   30437 round_trippers.go:580]     Audit-Id: f022d744-3d21-44fb-ae40-3003c7b127da
	I1101 00:19:09.086160   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:09.086165   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:09.086170   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:09.086272   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"824","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1101 00:19:09.086553   30437 pod_ready.go:92] pod "kube-apiserver-multinode-600483" in "kube-system" namespace has status "Ready":"True"
	I1101 00:19:09.086567   30437 pod_ready.go:81] duration metric: took 2.598371235s waiting for pod "kube-apiserver-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:19:09.086575   30437 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:19:09.086622   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-600483
	I1101 00:19:09.086629   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:09.086636   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:09.086641   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:09.090315   30437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:19:09.090334   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:09.090344   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:09.090356   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:09 GMT
	I1101 00:19:09.090365   30437 round_trippers.go:580]     Audit-Id: 1dc43603-b809-43f6-a39f-48980ace445c
	I1101 00:19:09.090373   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:09.090379   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:09.090384   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:09.090524   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-600483","namespace":"kube-system","uid":"9dd41877-c6ea-4591-90e1-632a234ffcf6","resourceVersion":"845","creationTimestamp":"2023-11-01T00:08:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f2b1fcba8b34b1f65e600fae0bd4374a","kubernetes.io/config.mirror":"f2b1fcba8b34b1f65e600fae0bd4374a","kubernetes.io/config.seen":"2023-11-01T00:08:20.448799328Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1101 00:19:09.090878   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:09.090891   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:09.090898   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:09.090907   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:09.094421   30437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:19:09.094440   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:09.094447   30437 round_trippers.go:580]     Audit-Id: b0b9b2dc-a3e3-4546-a4bb-4de50e0352c7
	I1101 00:19:09.094457   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:09.094462   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:09.094467   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:09.094472   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:09.094478   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:09 GMT
	I1101 00:19:09.094573   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"824","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1101 00:19:09.094861   30437 pod_ready.go:92] pod "kube-controller-manager-multinode-600483" in "kube-system" namespace has status "Ready":"True"
	I1101 00:19:09.094875   30437 pod_ready.go:81] duration metric: took 8.293496ms waiting for pod "kube-controller-manager-multinode-600483" in "kube-system" namespace to be "Ready" ...
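The control-plane pods checked so far (etcd, kube-apiserver, kube-controller-manager) are static pods run by the kubelet; the kubernetes.io/config.mirror and kubernetes.io/config.source annotations visible in their metadata above belong to the mirror Pod objects the kubelet publishes for them. A small, purely illustrative sketch of detecting that from a pod object:

```go
// Sketch: recognize a kubelet-published mirror pod by the annotation key
// that appears on the control-plane pods in the responses above.
package readiness

import corev1 "k8s.io/api/core/v1"

const mirrorPodAnnotation = "kubernetes.io/config.mirror"

// isMirrorPod reports whether the pod carries the mirror-pod annotation.
func isMirrorPod(pod *corev1.Pod) bool {
	_, ok := pod.Annotations[mirrorPodAnnotation]
	return ok
}
```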
	I1101 00:19:09.094886   30437 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7kvtf" in "kube-system" namespace to be "Ready" ...
	I1101 00:19:09.271377   30437 request.go:629] Waited for 176.437015ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7kvtf
	I1101 00:19:09.271461   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7kvtf
	I1101 00:19:09.271468   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:09.271476   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:09.271482   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:09.274649   30437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:19:09.274673   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:09.274681   30437 round_trippers.go:580]     Audit-Id: de330cc9-09e0-4d8c-994d-c28b2bde3733
	I1101 00:19:09.274686   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:09.274691   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:09.274696   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:09.274701   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:09.274706   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:09 GMT
	I1101 00:19:09.274890   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7kvtf","generateName":"kube-proxy-","namespace":"kube-system","uid":"e2101b7f-e517-4100-905d-f46517e68255","resourceVersion":"469","creationTimestamp":"2023-11-01T00:09:23Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2d674cb3-a003-4ca9-a8b5-a283ae64b7c6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d674cb3-a003-4ca9-a8b5-a283ae64b7c6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5525 chars]
	I1101 00:19:09.470730   30437 request.go:629] Waited for 195.333448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m02
	I1101 00:19:09.470811   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m02
	I1101 00:19:09.470816   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:09.470824   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:09.470830   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:09.473668   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:19:09.473699   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:09.473708   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:09.473716   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:09.473724   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:09.473732   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:09 GMT
	I1101 00:19:09.473740   30437 round_trippers.go:580]     Audit-Id: 3a51fda1-1831-471b-aef8-cc22088a04b8
	I1101 00:19:09.473749   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:09.473837   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483-m02","uid":"5b2b1f13-2a35-43d5-86a5-bb5c1d6395e1","resourceVersion":"700","creationTimestamp":"2023-11-01T00:09:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 3684 chars]
	I1101 00:19:09.474068   30437 pod_ready.go:92] pod "kube-proxy-7kvtf" in "kube-system" namespace has status "Ready":"True"
	I1101 00:19:09.474079   30437 pod_ready.go:81] duration metric: took 379.188588ms waiting for pod "kube-proxy-7kvtf" in "kube-system" namespace to be "Ready" ...
	I1101 00:19:09.474088   30437 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-84g2n" in "kube-system" namespace to be "Ready" ...
	I1101 00:19:09.671553   30437 request.go:629] Waited for 197.394271ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-proxy-84g2n
	I1101 00:19:09.671642   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-proxy-84g2n
	I1101 00:19:09.671655   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:09.671663   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:09.671669   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:09.674618   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:19:09.674640   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:09.674647   30437 round_trippers.go:580]     Audit-Id: b37a2a24-1b12-4b8d-9dbd-cdf56fb2856d
	I1101 00:19:09.674655   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:09.674664   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:09.674673   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:09.674684   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:09.674696   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:09 GMT
	I1101 00:19:09.674951   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-84g2n","generateName":"kube-proxy-","namespace":"kube-system","uid":"a98efae3-9303-43be-a139-d21a5630c6b8","resourceVersion":"680","creationTimestamp":"2023-11-01T00:10:15Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2d674cb3-a003-4ca9-a8b5-a283ae64b7c6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:10:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d674cb3-a003-4ca9-a8b5-a283ae64b7c6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1101 00:19:09.870718   30437 request.go:629] Waited for 195.35046ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m03
	I1101 00:19:09.870823   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m03
	I1101 00:19:09.870834   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:09.870842   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:09.870848   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:09.873234   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:19:09.873259   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:09.873267   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:09.873272   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:09 GMT
	I1101 00:19:09.873277   30437 round_trippers.go:580]     Audit-Id: 00b4609e-26e0-4fee-b814-8a8cf90897f6
	I1101 00:19:09.873282   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:09.873287   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:09.873292   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:09.873424   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483-m03","uid":"5050dc91-014d-4a1c-b839-f60403866911","resourceVersion":"707","creationTimestamp":"2023-11-01T00:10:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3411 chars]
	I1101 00:19:09.873694   30437 pod_ready.go:92] pod "kube-proxy-84g2n" in "kube-system" namespace has status "Ready":"True"
	I1101 00:19:09.873709   30437 pod_ready.go:81] duration metric: took 399.616256ms waiting for pod "kube-proxy-84g2n" in "kube-system" namespace to be "Ready" ...
	I1101 00:19:09.873718   30437 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tq28b" in "kube-system" namespace to be "Ready" ...
	I1101 00:19:10.071182   30437 request.go:629] Waited for 197.392947ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tq28b
	I1101 00:19:10.071248   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tq28b
	I1101 00:19:10.071253   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:10.071260   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:10.071267   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:10.074625   30437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:19:10.074648   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:10.074660   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:10.074666   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:10 GMT
	I1101 00:19:10.074672   30437 round_trippers.go:580]     Audit-Id: f36d0d7c-672f-4e79-8c6a-7bdcaad5565d
	I1101 00:19:10.074687   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:10.074693   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:10.074699   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:10.074807   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tq28b","generateName":"kube-proxy-","namespace":"kube-system","uid":"9534d8b8-4536-4a0a-8af5-440e6871a85f","resourceVersion":"793","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2d674cb3-a003-4ca9-a8b5-a283ae64b7c6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d674cb3-a003-4ca9-a8b5-a283ae64b7c6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1101 00:19:10.271667   30437 request.go:629] Waited for 196.423559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:10.271750   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:10.271759   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:10.271816   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:10.271833   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:10.275122   30437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:19:10.275158   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:10.275169   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:10.275176   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:10.275182   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:10.275188   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:10.275193   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:10 GMT
	I1101 00:19:10.275199   30437 round_trippers.go:580]     Audit-Id: e8565e2d-c297-4f06-b53a-f73a3803d0f9
	I1101 00:19:10.275429   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"824","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1101 00:19:10.275837   30437 pod_ready.go:92] pod "kube-proxy-tq28b" in "kube-system" namespace has status "Ready":"True"
	I1101 00:19:10.275861   30437 pod_ready.go:81] duration metric: took 402.132649ms waiting for pod "kube-proxy-tq28b" in "kube-system" namespace to be "Ready" ...
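The three kube-proxy pods just checked (kube-proxy-7kvtf, kube-proxy-84g2n, kube-proxy-tq28b) are the per-node members of the kube-proxy DaemonSet; the k8s-app=kube-proxy label visible in their metadata above is enough to enumerate them in a single request instead of fetching each by name. A sketch, for illustration only:

```go
// Sketch: list every kube-proxy pod in kube-system by label selector.
package readiness

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listKubeProxyPods returns the names of the kube-proxy pods, one per node
// the DaemonSet has scheduled onto.
func listKubeProxyPods(ctx context.Context, cs kubernetes.Interface) ([]string, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
		LabelSelector: "k8s-app=kube-proxy",
	})
	if err != nil {
		return nil, err
	}
	names := make([]string, 0, len(pods.Items))
	for _, p := range pods.Items {
		names = append(names, p.Name)
	}
	return names, nil
}
```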
	I1101 00:19:10.275870   30437 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:19:10.471376   30437 request.go:629] Waited for 195.449165ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-600483
	I1101 00:19:10.471431   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-600483
	I1101 00:19:10.471436   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:10.471443   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:10.471451   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:10.474294   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:19:10.474325   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:10.474335   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:10 GMT
	I1101 00:19:10.474343   30437 round_trippers.go:580]     Audit-Id: 175335fe-b354-4e7b-9807-790009d069df
	I1101 00:19:10.474350   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:10.474358   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:10.474366   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:10.474379   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:10.474564   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-600483","namespace":"kube-system","uid":"9cdd0be5-035a-49f5-8796-831ebde28bf0","resourceVersion":"826","creationTimestamp":"2023-11-01T00:08:30Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"01c4e8f68a00a3553dcff3388cb56149","kubernetes.io/config.mirror":"01c4e8f68a00a3553dcff3388cb56149","kubernetes.io/config.seen":"2023-11-01T00:08:30.293495470Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1101 00:19:10.671394   30437 request.go:629] Waited for 196.435258ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:10.671488   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:19:10.671495   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:10.671503   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:10.671511   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:10.674265   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:19:10.674291   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:10.674302   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:10.674311   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:10 GMT
	I1101 00:19:10.674320   30437 round_trippers.go:580]     Audit-Id: 44f17cfb-0712-43ea-b6d6-95d91e280fc6
	I1101 00:19:10.674329   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:10.674336   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:10.674341   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:10.674553   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"824","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1101 00:19:10.674862   30437 pod_ready.go:92] pod "kube-scheduler-multinode-600483" in "kube-system" namespace has status "Ready":"True"
	I1101 00:19:10.674877   30437 pod_ready.go:81] duration metric: took 399.001961ms waiting for pod "kube-scheduler-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:19:10.674887   30437 pod_ready.go:38] duration metric: took 7.594696616s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 00:19:10.674899   30437 api_server.go:52] waiting for apiserver process to appear ...
	I1101 00:19:10.674952   30437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:19:10.689335   30437 command_runner.go:130] > 1075
	I1101 00:19:10.689371   30437 api_server.go:72] duration metric: took 8.9937712s to wait for apiserver process to appear ...
	I1101 00:19:10.689381   30437 api_server.go:88] waiting for apiserver healthz status ...
	I1101 00:19:10.689393   30437 api_server.go:253] Checking apiserver healthz at https://192.168.39.130:8443/healthz ...
	I1101 00:19:10.694437   30437 api_server.go:279] https://192.168.39.130:8443/healthz returned 200:
	ok
	I1101 00:19:10.694533   30437 round_trippers.go:463] GET https://192.168.39.130:8443/version
	I1101 00:19:10.694543   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:10.694551   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:10.694564   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:10.695946   30437 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1101 00:19:10.695969   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:10.695976   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:10 GMT
	I1101 00:19:10.695981   30437 round_trippers.go:580]     Audit-Id: 1fa1614a-2acf-4241-9efb-406370680c92
	I1101 00:19:10.695986   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:10.695991   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:10.695996   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:10.696001   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:10.696006   30437 round_trippers.go:580]     Content-Length: 264
	I1101 00:19:10.696021   30437 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1101 00:19:10.696067   30437 api_server.go:141] control plane version: v1.28.3
	I1101 00:19:10.696080   30437 api_server.go:131] duration metric: took 6.695181ms to wait for apiserver health ...
	I1101 00:19:10.696087   30437 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 00:19:10.871544   30437 request.go:629] Waited for 175.379645ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods
	I1101 00:19:10.871618   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods
	I1101 00:19:10.871626   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:10.871637   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:10.871649   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:10.876443   30437 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1101 00:19:10.876486   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:10.876493   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:10.876499   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:10 GMT
	I1101 00:19:10.876504   30437 round_trippers.go:580]     Audit-Id: 8ad0d925-224a-4e17-88c9-7fd0fefbfe1b
	I1101 00:19:10.876509   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:10.876514   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:10.876519   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:10.878208   30437 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"856"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rpvvn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d8ab0ebb-aa1f-4143-b987-6c1ae065954a","resourceVersion":"833","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15779dee-f1e7-4836-aba2-2d57728c2309","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15779dee-f1e7-4836-aba2-2d57728c2309\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81867 chars]
	I1101 00:19:10.881779   30437 system_pods.go:59] 12 kube-system pods found
	I1101 00:19:10.881812   30437 system_pods.go:61] "coredns-5dd5756b68-rpvvn" [d8ab0ebb-aa1f-4143-b987-6c1ae065954a] Running
	I1101 00:19:10.881820   30437 system_pods.go:61] "etcd-multinode-600483" [c612ebac-fa1d-474a-b8cd-5e922a5f76dd] Running
	I1101 00:19:10.881830   30437 system_pods.go:61] "kindnet-d4f6q" [d5c9428a-a6ef-44a8-b3c8-f65e25e9d4a9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 00:19:10.881839   30437 system_pods.go:61] "kindnet-l75r4" [abfa8ec3-0565-4927-a07c-9fed1240d270] Running
	I1101 00:19:10.881848   30437 system_pods.go:61] "kindnet-ldrkn" [3d2ad5a0-69f9-4bd2-8bd8-503b7f7602a9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 00:19:10.881855   30437 system_pods.go:61] "kube-apiserver-multinode-600483" [bd94a63a-62c2-4654-aaf0-2e9df086b168] Running
	I1101 00:19:10.881870   30437 system_pods.go:61] "kube-controller-manager-multinode-600483" [9dd41877-c6ea-4591-90e1-632a234ffcf6] Running
	I1101 00:19:10.881879   30437 system_pods.go:61] "kube-proxy-7kvtf" [e2101b7f-e517-4100-905d-f46517e68255] Running
	I1101 00:19:10.881884   30437 system_pods.go:61] "kube-proxy-84g2n" [a98efae3-9303-43be-a139-d21a5630c6b8] Running
	I1101 00:19:10.881891   30437 system_pods.go:61] "kube-proxy-tq28b" [9534d8b8-4536-4a0a-8af5-440e6871a85f] Running
	I1101 00:19:10.881897   30437 system_pods.go:61] "kube-scheduler-multinode-600483" [9cdd0be5-035a-49f5-8796-831ebde28bf0] Running
	I1101 00:19:10.881907   30437 system_pods.go:61] "storage-provisioner" [a67f136b-7645-4eb9-9568-52e3ab06d66e] Running
	I1101 00:19:10.881919   30437 system_pods.go:74] duration metric: took 185.825909ms to wait for pod list to return data ...
	I1101 00:19:10.881931   30437 default_sa.go:34] waiting for default service account to be created ...
	I1101 00:19:11.071396   30437 request.go:629] Waited for 189.39232ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/namespaces/default/serviceaccounts
	I1101 00:19:11.071486   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/default/serviceaccounts
	I1101 00:19:11.071492   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:11.071500   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:11.071506   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:11.074621   30437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:19:11.074644   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:11.074651   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:11.074656   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:11.074662   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:11.074669   30437 round_trippers.go:580]     Content-Length: 261
	I1101 00:19:11.074674   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:11 GMT
	I1101 00:19:11.074679   30437 round_trippers.go:580]     Audit-Id: 2c3c4385-1b16-4a05-82d8-2bbf0e311438
	I1101 00:19:11.074684   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:11.074705   30437 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"856"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"04a1b135-6f95-4452-a7aa-e2cd772cc1b9","resourceVersion":"301","creationTimestamp":"2023-11-01T00:08:42Z"}}]}
	I1101 00:19:11.074881   30437 default_sa.go:45] found service account: "default"
	I1101 00:19:11.074903   30437 default_sa.go:55] duration metric: took 192.963641ms for default service account to be created ...
	I1101 00:19:11.074911   30437 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 00:19:11.271588   30437 request.go:629] Waited for 196.608231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods
	I1101 00:19:11.271663   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods
	I1101 00:19:11.271669   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:11.271677   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:11.271689   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:11.276211   30437 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1101 00:19:11.276246   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:11.276258   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:11.276266   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:11.276273   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:11.276281   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:11 GMT
	I1101 00:19:11.276290   30437 round_trippers.go:580]     Audit-Id: 7c22aeea-0c0d-4759-acce-4ac7be09d150
	I1101 00:19:11.276303   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:11.277643   30437 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"856"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rpvvn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d8ab0ebb-aa1f-4143-b987-6c1ae065954a","resourceVersion":"833","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15779dee-f1e7-4836-aba2-2d57728c2309","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15779dee-f1e7-4836-aba2-2d57728c2309\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81867 chars]
	I1101 00:19:11.280526   30437 system_pods.go:86] 12 kube-system pods found
	I1101 00:19:11.280551   30437 system_pods.go:89] "coredns-5dd5756b68-rpvvn" [d8ab0ebb-aa1f-4143-b987-6c1ae065954a] Running
	I1101 00:19:11.280556   30437 system_pods.go:89] "etcd-multinode-600483" [c612ebac-fa1d-474a-b8cd-5e922a5f76dd] Running
	I1101 00:19:11.280565   30437 system_pods.go:89] "kindnet-d4f6q" [d5c9428a-a6ef-44a8-b3c8-f65e25e9d4a9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 00:19:11.280570   30437 system_pods.go:89] "kindnet-l75r4" [abfa8ec3-0565-4927-a07c-9fed1240d270] Running
	I1101 00:19:11.280579   30437 system_pods.go:89] "kindnet-ldrkn" [3d2ad5a0-69f9-4bd2-8bd8-503b7f7602a9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 00:19:11.280583   30437 system_pods.go:89] "kube-apiserver-multinode-600483" [bd94a63a-62c2-4654-aaf0-2e9df086b168] Running
	I1101 00:19:11.280589   30437 system_pods.go:89] "kube-controller-manager-multinode-600483" [9dd41877-c6ea-4591-90e1-632a234ffcf6] Running
	I1101 00:19:11.280593   30437 system_pods.go:89] "kube-proxy-7kvtf" [e2101b7f-e517-4100-905d-f46517e68255] Running
	I1101 00:19:11.280599   30437 system_pods.go:89] "kube-proxy-84g2n" [a98efae3-9303-43be-a139-d21a5630c6b8] Running
	I1101 00:19:11.280603   30437 system_pods.go:89] "kube-proxy-tq28b" [9534d8b8-4536-4a0a-8af5-440e6871a85f] Running
	I1101 00:19:11.280610   30437 system_pods.go:89] "kube-scheduler-multinode-600483" [9cdd0be5-035a-49f5-8796-831ebde28bf0] Running
	I1101 00:19:11.280614   30437 system_pods.go:89] "storage-provisioner" [a67f136b-7645-4eb9-9568-52e3ab06d66e] Running
	I1101 00:19:11.280620   30437 system_pods.go:126] duration metric: took 205.700453ms to wait for k8s-apps to be running ...
	I1101 00:19:11.280634   30437 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 00:19:11.280675   30437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 00:19:11.294815   30437 system_svc.go:56] duration metric: took 14.173552ms WaitForService to wait for kubelet.
	I1101 00:19:11.294839   30437 kubeadm.go:581] duration metric: took 9.599240363s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1101 00:19:11.294856   30437 node_conditions.go:102] verifying NodePressure condition ...
	I1101 00:19:11.471326   30437 request.go:629] Waited for 176.404173ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/nodes
	I1101 00:19:11.471431   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes
	I1101 00:19:11.471440   30437 round_trippers.go:469] Request Headers:
	I1101 00:19:11.471451   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:19:11.471464   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:19:11.474478   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:19:11.474498   30437 round_trippers.go:577] Response Headers:
	I1101 00:19:11.474505   30437 round_trippers.go:580]     Audit-Id: bb1ac45c-5f43-40a0-b064-4877941acdaf
	I1101 00:19:11.474511   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:19:11.474516   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:19:11.474521   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:19:11.474526   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:19:11.474532   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:19:11 GMT
	I1101 00:19:11.474975   30437 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"856"},"items":[{"metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"824","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"manage
dFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1"," [truncated 15081 chars]
	I1101 00:19:11.475514   30437 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 00:19:11.475531   30437 node_conditions.go:123] node cpu capacity is 2
	I1101 00:19:11.475539   30437 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 00:19:11.475543   30437 node_conditions.go:123] node cpu capacity is 2
	I1101 00:19:11.475547   30437 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 00:19:11.475553   30437 node_conditions.go:123] node cpu capacity is 2
	I1101 00:19:11.475556   30437 node_conditions.go:105] duration metric: took 180.696841ms to run NodePressure ...
	I1101 00:19:11.475580   30437 start.go:228] waiting for startup goroutines ...
	I1101 00:19:11.475587   30437 start.go:233] waiting for cluster config update ...
	I1101 00:19:11.475617   30437 start.go:242] writing updated cluster config ...
	I1101 00:19:11.476088   30437 config.go:182] Loaded profile config "multinode-600483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:19:11.476172   30437 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/config.json ...
	I1101 00:19:11.479381   30437 out.go:177] * Starting worker node multinode-600483-m02 in cluster multinode-600483
	I1101 00:19:11.480659   30437 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 00:19:11.480682   30437 cache.go:56] Caching tarball of preloaded images
	I1101 00:19:11.480775   30437 preload.go:174] Found /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 00:19:11.480787   30437 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1101 00:19:11.480891   30437 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/config.json ...
	I1101 00:19:11.481053   30437 start.go:365] acquiring machines lock for multinode-600483-m02: {Name:mk7aad88408c319111b9be8e59d9593a9e88374b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 00:19:11.481093   30437 start.go:369] acquired machines lock for "multinode-600483-m02" in 21.72µs
	I1101 00:19:11.481106   30437 start.go:96] Skipping create...Using existing machine configuration
	I1101 00:19:11.481110   30437 fix.go:54] fixHost starting: m02
	I1101 00:19:11.481354   30437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1101 00:19:11.481382   30437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:19:11.496674   30437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45497
	I1101 00:19:11.497187   30437 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:19:11.497654   30437 main.go:141] libmachine: Using API Version  1
	I1101 00:19:11.497668   30437 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:19:11.497984   30437 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:19:11.498190   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .DriverName
	I1101 00:19:11.498351   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetState
	I1101 00:19:11.500328   30437 fix.go:102] recreateIfNeeded on multinode-600483-m02: state=Running err=<nil>
	W1101 00:19:11.500352   30437 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 00:19:11.502362   30437 out.go:177] * Updating the running kvm2 "multinode-600483-m02" VM ...
	I1101 00:19:11.503892   30437 machine.go:88] provisioning docker machine ...
	I1101 00:19:11.503915   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .DriverName
	I1101 00:19:11.504217   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetMachineName
	I1101 00:19:11.504392   30437 buildroot.go:166] provisioning hostname "multinode-600483-m02"
	I1101 00:19:11.504412   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetMachineName
	I1101 00:19:11.504565   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHHostname
	I1101 00:19:11.507122   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:19:11.507579   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cb:5d", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:09:06 +0000 UTC Type:0 Mac:52:54:00:07:cb:5d Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-600483-m02 Clientid:01:52:54:00:07:cb:5d}
	I1101 00:19:11.507602   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:19:11.507795   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHPort
	I1101 00:19:11.507993   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHKeyPath
	I1101 00:19:11.508164   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHKeyPath
	I1101 00:19:11.508317   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHUsername
	I1101 00:19:11.508490   30437 main.go:141] libmachine: Using SSH client type: native
	I1101 00:19:11.508939   30437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1101 00:19:11.508961   30437 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-600483-m02 && echo "multinode-600483-m02" | sudo tee /etc/hostname
	I1101 00:19:11.643386   30437 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-600483-m02
	
	I1101 00:19:11.643413   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHHostname
	I1101 00:19:11.647277   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:19:11.647996   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cb:5d", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:09:06 +0000 UTC Type:0 Mac:52:54:00:07:cb:5d Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-600483-m02 Clientid:01:52:54:00:07:cb:5d}
	I1101 00:19:11.648034   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:19:11.648246   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHPort
	I1101 00:19:11.648530   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHKeyPath
	I1101 00:19:11.648713   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHKeyPath
	I1101 00:19:11.648896   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHUsername
	I1101 00:19:11.649080   30437 main.go:141] libmachine: Using SSH client type: native
	I1101 00:19:11.649379   30437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1101 00:19:11.649397   30437 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-600483-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-600483-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-600483-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 00:19:11.761720   30437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 00:19:11.761755   30437 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1101 00:19:11.761775   30437 buildroot.go:174] setting up certificates
	I1101 00:19:11.761813   30437 provision.go:83] configureAuth start
	I1101 00:19:11.761825   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetMachineName
	I1101 00:19:11.762102   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetIP
	I1101 00:19:11.764950   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:19:11.765684   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cb:5d", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:09:06 +0000 UTC Type:0 Mac:52:54:00:07:cb:5d Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-600483-m02 Clientid:01:52:54:00:07:cb:5d}
	I1101 00:19:11.765734   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:19:11.766149   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHHostname
	I1101 00:19:11.769416   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:19:11.770168   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cb:5d", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:09:06 +0000 UTC Type:0 Mac:52:54:00:07:cb:5d Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-600483-m02 Clientid:01:52:54:00:07:cb:5d}
	I1101 00:19:11.770219   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:19:11.770381   30437 provision.go:138] copyHostCerts
	I1101 00:19:11.770422   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 00:19:11.770465   30437 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem, removing ...
	I1101 00:19:11.770478   30437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 00:19:11.770566   30437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1101 00:19:11.770656   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 00:19:11.770680   30437 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem, removing ...
	I1101 00:19:11.770687   30437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 00:19:11.770725   30437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1101 00:19:11.770777   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 00:19:11.770799   30437 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem, removing ...
	I1101 00:19:11.770809   30437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 00:19:11.770843   30437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1101 00:19:11.770910   30437 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.multinode-600483-m02 san=[192.168.39.109 192.168.39.109 localhost 127.0.0.1 minikube multinode-600483-m02]
	I1101 00:19:12.053531   30437 provision.go:172] copyRemoteCerts
	I1101 00:19:12.053591   30437 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 00:19:12.053627   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHHostname
	I1101 00:19:12.056919   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:19:12.057235   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cb:5d", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:09:06 +0000 UTC Type:0 Mac:52:54:00:07:cb:5d Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-600483-m02 Clientid:01:52:54:00:07:cb:5d}
	I1101 00:19:12.057264   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:19:12.057400   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHPort
	I1101 00:19:12.057604   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHKeyPath
	I1101 00:19:12.057747   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHUsername
	I1101 00:19:12.057888   30437 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483-m02/id_rsa Username:docker}
	I1101 00:19:12.141954   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 00:19:12.142073   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 00:19:12.169615   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 00:19:12.169699   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1101 00:19:12.196527   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 00:19:12.196613   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 00:19:12.220733   30437 provision.go:86] duration metric: configureAuth took 458.907912ms
	I1101 00:19:12.220760   30437 buildroot.go:189] setting minikube options for container-runtime
	I1101 00:19:12.220979   30437 config.go:182] Loaded profile config "multinode-600483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:19:12.221081   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHHostname
	I1101 00:19:12.223985   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:19:12.224326   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cb:5d", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:09:06 +0000 UTC Type:0 Mac:52:54:00:07:cb:5d Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-600483-m02 Clientid:01:52:54:00:07:cb:5d}
	I1101 00:19:12.224359   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:19:12.224546   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHPort
	I1101 00:19:12.224795   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHKeyPath
	I1101 00:19:12.224966   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHKeyPath
	I1101 00:19:12.225145   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHUsername
	I1101 00:19:12.225325   30437 main.go:141] libmachine: Using SSH client type: native
	I1101 00:19:12.225629   30437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1101 00:19:12.225645   30437 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 00:20:42.939364   30437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 00:20:42.939392   30437 machine.go:91] provisioned docker machine in 1m31.435485874s
	I1101 00:20:42.939406   30437 start.go:300] post-start starting for "multinode-600483-m02" (driver="kvm2")
	I1101 00:20:42.939420   30437 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 00:20:42.939443   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .DriverName
	I1101 00:20:42.939738   30437 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 00:20:42.939768   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHHostname
	I1101 00:20:42.942699   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:20:42.943058   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cb:5d", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:09:06 +0000 UTC Type:0 Mac:52:54:00:07:cb:5d Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-600483-m02 Clientid:01:52:54:00:07:cb:5d}
	I1101 00:20:42.943092   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:20:42.943283   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHPort
	I1101 00:20:42.943450   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHKeyPath
	I1101 00:20:42.943595   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHUsername
	I1101 00:20:42.943752   30437 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483-m02/id_rsa Username:docker}
	I1101 00:20:43.054064   30437 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 00:20:43.059978   30437 command_runner.go:130] > NAME=Buildroot
	I1101 00:20:43.060010   30437 command_runner.go:130] > VERSION=2021.02.12-1-g0cee705-dirty
	I1101 00:20:43.060021   30437 command_runner.go:130] > ID=buildroot
	I1101 00:20:43.060029   30437 command_runner.go:130] > VERSION_ID=2021.02.12
	I1101 00:20:43.060037   30437 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1101 00:20:43.060104   30437 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 00:20:43.060132   30437 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1101 00:20:43.060210   30437 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1101 00:20:43.060299   30437 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> 145042.pem in /etc/ssl/certs
	I1101 00:20:43.060312   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> /etc/ssl/certs/145042.pem
	I1101 00:20:43.060427   30437 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 00:20:43.078520   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /etc/ssl/certs/145042.pem (1708 bytes)
	I1101 00:20:43.121892   30437 start.go:303] post-start completed in 182.469376ms
	I1101 00:20:43.121916   30437 fix.go:56] fixHost completed within 1m31.640804357s
	I1101 00:20:43.121939   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHHostname
	I1101 00:20:43.125583   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:20:43.126134   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cb:5d", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:09:06 +0000 UTC Type:0 Mac:52:54:00:07:cb:5d Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-600483-m02 Clientid:01:52:54:00:07:cb:5d}
	I1101 00:20:43.126204   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:20:43.126432   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHPort
	I1101 00:20:43.126669   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHKeyPath
	I1101 00:20:43.126823   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHKeyPath
	I1101 00:20:43.127024   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHUsername
	I1101 00:20:43.127217   30437 main.go:141] libmachine: Using SSH client type: native
	I1101 00:20:43.127655   30437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1101 00:20:43.127698   30437 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1101 00:20:43.268900   30437 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698798043.260884165
	
	I1101 00:20:43.268918   30437 fix.go:206] guest clock: 1698798043.260884165
	I1101 00:20:43.268928   30437 fix.go:219] Guest: 2023-11-01 00:20:43.260884165 +0000 UTC Remote: 2023-11-01 00:20:43.121920231 +0000 UTC m=+450.193286978 (delta=138.963934ms)
	I1101 00:20:43.268958   30437 fix.go:190] guest clock delta is within tolerance: 138.963934ms
	I1101 00:20:43.268965   30437 start.go:83] releasing machines lock for "multinode-600483-m02", held for 1m31.787862509s
	I1101 00:20:43.269000   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .DriverName
	I1101 00:20:43.269260   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetIP
	I1101 00:20:43.271908   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:20:43.272349   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cb:5d", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:09:06 +0000 UTC Type:0 Mac:52:54:00:07:cb:5d Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-600483-m02 Clientid:01:52:54:00:07:cb:5d}
	I1101 00:20:43.272383   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:20:43.274613   30437 out.go:177] * Found network options:
	I1101 00:20:43.276132   30437 out.go:177]   - NO_PROXY=192.168.39.130
	W1101 00:20:43.277589   30437 proxy.go:119] fail to check proxy env: Error ip not in block
	I1101 00:20:43.277651   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .DriverName
	I1101 00:20:43.278346   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .DriverName
	I1101 00:20:43.278531   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .DriverName
	I1101 00:20:43.278609   30437 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 00:20:43.278653   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHHostname
	W1101 00:20:43.278743   30437 proxy.go:119] fail to check proxy env: Error ip not in block
	I1101 00:20:43.278822   30437 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 00:20:43.278844   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHHostname
	I1101 00:20:43.281684   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:20:43.281839   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:20:43.282056   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cb:5d", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:09:06 +0000 UTC Type:0 Mac:52:54:00:07:cb:5d Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-600483-m02 Clientid:01:52:54:00:07:cb:5d}
	I1101 00:20:43.282092   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:20:43.282200   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cb:5d", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:09:06 +0000 UTC Type:0 Mac:52:54:00:07:cb:5d Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-600483-m02 Clientid:01:52:54:00:07:cb:5d}
	I1101 00:20:43.282231   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:20:43.282275   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHPort
	I1101 00:20:43.282486   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHPort
	I1101 00:20:43.282503   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHKeyPath
	I1101 00:20:43.282672   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHUsername
	I1101 00:20:43.282677   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHKeyPath
	I1101 00:20:43.282853   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHUsername
	I1101 00:20:43.282863   30437 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483-m02/id_rsa Username:docker}
	I1101 00:20:43.283001   30437 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483-m02/id_rsa Username:docker}
	I1101 00:20:43.524109   30437 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1101 00:20:43.524203   30437 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1101 00:20:43.531775   30437 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1101 00:20:43.531828   30437 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 00:20:43.531875   30437 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 00:20:43.541810   30437 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 00:20:43.541839   30437 start.go:472] detecting cgroup driver to use...
	I1101 00:20:43.541898   30437 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 00:20:43.556354   30437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 00:20:43.570703   30437 docker.go:204] disabling cri-docker service (if available) ...
	I1101 00:20:43.570781   30437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 00:20:43.585273   30437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 00:20:43.604153   30437 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 00:20:43.747111   30437 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 00:20:43.888864   30437 docker.go:220] disabling docker service ...
	I1101 00:20:43.888931   30437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 00:20:43.906507   30437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 00:20:43.920695   30437 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 00:20:44.049062   30437 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 00:20:44.184290   30437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 00:20:44.206834   30437 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 00:20:44.224903   30437 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1101 00:20:44.224938   30437 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 00:20:44.224983   30437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:20:44.235080   30437 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 00:20:44.235170   30437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:20:44.246525   30437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:20:44.258239   30437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:20:44.269681   30437 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 00:20:44.281478   30437 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 00:20:44.297397   30437 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1101 00:20:44.297532   30437 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 00:20:44.306783   30437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:20:44.453293   30437 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 00:20:46.275190   30437 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.821860838s)
	I1101 00:20:46.275217   30437 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 00:20:46.275276   30437 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 00:20:46.284261   30437 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1101 00:20:46.284286   30437 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1101 00:20:46.284297   30437 command_runner.go:130] > Device: 16h/22d	Inode: 1228        Links: 1
	I1101 00:20:46.284307   30437 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1101 00:20:46.284316   30437 command_runner.go:130] > Access: 2023-11-01 00:20:46.173267525 +0000
	I1101 00:20:46.284326   30437 command_runner.go:130] > Modify: 2023-11-01 00:20:46.173267525 +0000
	I1101 00:20:46.284336   30437 command_runner.go:130] > Change: 2023-11-01 00:20:46.173267525 +0000
	I1101 00:20:46.284343   30437 command_runner.go:130] >  Birth: -
	I1101 00:20:46.284375   30437 start.go:540] Will wait 60s for crictl version
	I1101 00:20:46.284422   30437 ssh_runner.go:195] Run: which crictl
	I1101 00:20:46.288171   30437 command_runner.go:130] > /usr/bin/crictl
	I1101 00:20:46.288240   30437 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 00:20:46.324401   30437 command_runner.go:130] > Version:  0.1.0
	I1101 00:20:46.324426   30437 command_runner.go:130] > RuntimeName:  cri-o
	I1101 00:20:46.324434   30437 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1101 00:20:46.324442   30437 command_runner.go:130] > RuntimeApiVersion:  v1
	I1101 00:20:46.325485   30437 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1101 00:20:46.325562   30437 ssh_runner.go:195] Run: crio --version
	I1101 00:20:46.372448   30437 command_runner.go:130] > crio version 1.24.1
	I1101 00:20:46.372475   30437 command_runner.go:130] > Version:          1.24.1
	I1101 00:20:46.372486   30437 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1101 00:20:46.372494   30437 command_runner.go:130] > GitTreeState:     dirty
	I1101 00:20:46.372503   30437 command_runner.go:130] > BuildDate:        2023-10-31T22:57:11Z
	I1101 00:20:46.372510   30437 command_runner.go:130] > GoVersion:        go1.19.9
	I1101 00:20:46.372516   30437 command_runner.go:130] > Compiler:         gc
	I1101 00:20:46.372523   30437 command_runner.go:130] > Platform:         linux/amd64
	I1101 00:20:46.372532   30437 command_runner.go:130] > Linkmode:         dynamic
	I1101 00:20:46.372540   30437 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1101 00:20:46.372548   30437 command_runner.go:130] > SeccompEnabled:   true
	I1101 00:20:46.372552   30437 command_runner.go:130] > AppArmorEnabled:  false
	I1101 00:20:46.372634   30437 ssh_runner.go:195] Run: crio --version
	I1101 00:20:46.416079   30437 command_runner.go:130] > crio version 1.24.1
	I1101 00:20:46.416105   30437 command_runner.go:130] > Version:          1.24.1
	I1101 00:20:46.416116   30437 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1101 00:20:46.416127   30437 command_runner.go:130] > GitTreeState:     dirty
	I1101 00:20:46.416139   30437 command_runner.go:130] > BuildDate:        2023-10-31T22:57:11Z
	I1101 00:20:46.416147   30437 command_runner.go:130] > GoVersion:        go1.19.9
	I1101 00:20:46.416154   30437 command_runner.go:130] > Compiler:         gc
	I1101 00:20:46.416162   30437 command_runner.go:130] > Platform:         linux/amd64
	I1101 00:20:46.416170   30437 command_runner.go:130] > Linkmode:         dynamic
	I1101 00:20:46.416184   30437 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1101 00:20:46.416196   30437 command_runner.go:130] > SeccompEnabled:   true
	I1101 00:20:46.416202   30437 command_runner.go:130] > AppArmorEnabled:  false
	I1101 00:20:46.418315   30437 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1101 00:20:46.419901   30437 out.go:177]   - env NO_PROXY=192.168.39.130
	I1101 00:20:46.421447   30437 main.go:141] libmachine: (multinode-600483-m02) Calling .GetIP
	I1101 00:20:46.424132   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:20:46.424477   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cb:5d", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:09:06 +0000 UTC Type:0 Mac:52:54:00:07:cb:5d Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-600483-m02 Clientid:01:52:54:00:07:cb:5d}
	I1101 00:20:46.424500   30437 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:20:46.424683   30437 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1101 00:20:46.428635   30437 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1101 00:20:46.428814   30437 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483 for IP: 192.168.39.109
	I1101 00:20:46.428845   30437 certs.go:190] acquiring lock for shared ca certs: {Name:mk6185b3938c4b51e80f57b7c81926c81b632b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:20:46.428991   30437 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key
	I1101 00:20:46.429041   30437 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key
	I1101 00:20:46.429053   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1101 00:20:46.429071   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1101 00:20:46.429083   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1101 00:20:46.429099   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1101 00:20:46.429144   30437 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem (1338 bytes)
	W1101 00:20:46.429172   30437 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504_empty.pem, impossibly tiny 0 bytes
	I1101 00:20:46.429182   30437 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 00:20:46.429203   30437 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem (1078 bytes)
	I1101 00:20:46.429226   30437 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem (1123 bytes)
	I1101 00:20:46.429248   30437 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem (1675 bytes)
	I1101 00:20:46.429294   30437 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem (1708 bytes)
	I1101 00:20:46.429324   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> /usr/share/ca-certificates/145042.pem
	I1101 00:20:46.429344   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:20:46.429361   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem -> /usr/share/ca-certificates/14504.pem
	I1101 00:20:46.429808   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 00:20:46.452435   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 00:20:46.474360   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 00:20:46.496866   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 00:20:46.520014   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /usr/share/ca-certificates/145042.pem (1708 bytes)
	I1101 00:20:46.543262   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 00:20:46.565561   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem --> /usr/share/ca-certificates/14504.pem (1338 bytes)
	I1101 00:20:46.588830   30437 ssh_runner.go:195] Run: openssl version
	I1101 00:20:46.594532   30437 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1101 00:20:46.594627   30437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145042.pem && ln -fs /usr/share/ca-certificates/145042.pem /etc/ssl/certs/145042.pem"
	I1101 00:20:46.604941   30437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145042.pem
	I1101 00:20:46.609846   30437 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 00:20:46.609871   30437 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 00:20:46.609911   30437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145042.pem
	I1101 00:20:46.615590   30437 command_runner.go:130] > 3ec20f2e
	I1101 00:20:46.615657   30437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145042.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 00:20:46.624151   30437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 00:20:46.634738   30437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:20:46.639474   30437 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:20:46.639618   30437 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:20:46.639681   30437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:20:46.644820   30437 command_runner.go:130] > b5213941
	I1101 00:20:46.645127   30437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 00:20:46.654181   30437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14504.pem && ln -fs /usr/share/ca-certificates/14504.pem /etc/ssl/certs/14504.pem"
	I1101 00:20:46.664517   30437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14504.pem
	I1101 00:20:46.669072   30437 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 00:20:46.669097   30437 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 00:20:46.669143   30437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14504.pem
	I1101 00:20:46.674455   30437 command_runner.go:130] > 51391683
	I1101 00:20:46.674669   30437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14504.pem /etc/ssl/certs/51391683.0"
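	Each CA certificate is copied under /usr/share/ca-certificates, linked into /etc/ssl/certs by name, and then linked again under its OpenSSL subject hash, which is what the openssl x509 -hash calls above compute. A sketch of installing one additional certificate the same way (example.pem is an illustrative file name, not one from this run):
	    # install a CA cert so OpenSSL can find it by subject hash in /etc/ssl/certs
	    sudo cp example.pem /usr/share/ca-certificates/example.pem
	    sudo ln -fs /usr/share/ca-certificates/example.pem /etc/ssl/certs/example.pem
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
	    sudo ln -fs /etc/ssl/certs/example.pem "/etc/ssl/certs/${HASH}.0"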
	I1101 00:20:46.683564   30437 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 00:20:46.687553   30437 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1101 00:20:46.687690   30437 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1101 00:20:46.687812   30437 ssh_runner.go:195] Run: crio config
	I1101 00:20:46.736513   30437 command_runner.go:130] ! time="2023-11-01 00:20:46.728515095Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1101 00:20:46.736540   30437 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1101 00:20:46.745981   30437 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1101 00:20:46.746003   30437 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1101 00:20:46.746010   30437 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1101 00:20:46.746013   30437 command_runner.go:130] > #
	I1101 00:20:46.746020   30437 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1101 00:20:46.746026   30437 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1101 00:20:46.746038   30437 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1101 00:20:46.746047   30437 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1101 00:20:46.746054   30437 command_runner.go:130] > # reload'.
	I1101 00:20:46.746063   30437 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1101 00:20:46.746074   30437 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1101 00:20:46.746089   30437 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1101 00:20:46.746108   30437 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1101 00:20:46.746115   30437 command_runner.go:130] > [crio]
	I1101 00:20:46.746124   30437 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1101 00:20:46.746132   30437 command_runner.go:130] > # container images, in this directory.
	I1101 00:20:46.746137   30437 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1101 00:20:46.746147   30437 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1101 00:20:46.746154   30437 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1101 00:20:46.746160   30437 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1101 00:20:46.746170   30437 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1101 00:20:46.746174   30437 command_runner.go:130] > storage_driver = "overlay"
	I1101 00:20:46.746183   30437 command_runner.go:130] > # List of options to pass to the storage driver. Please refer to
	I1101 00:20:46.746189   30437 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1101 00:20:46.746195   30437 command_runner.go:130] > storage_option = [
	I1101 00:20:46.746200   30437 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1101 00:20:46.746206   30437 command_runner.go:130] > ]
	I1101 00:20:46.746213   30437 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1101 00:20:46.746222   30437 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1101 00:20:46.746229   30437 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1101 00:20:46.746235   30437 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1101 00:20:46.746243   30437 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1101 00:20:46.746247   30437 command_runner.go:130] > # always happen on a node reboot
	I1101 00:20:46.746255   30437 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1101 00:20:46.746260   30437 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1101 00:20:46.746267   30437 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1101 00:20:46.746282   30437 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1101 00:20:46.746291   30437 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1101 00:20:46.746298   30437 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1101 00:20:46.746305   30437 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1101 00:20:46.746312   30437 command_runner.go:130] > # internal_wipe = true
	I1101 00:20:46.746317   30437 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1101 00:20:46.746326   30437 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1101 00:20:46.746332   30437 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1101 00:20:46.746338   30437 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1101 00:20:46.746345   30437 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1101 00:20:46.746351   30437 command_runner.go:130] > [crio.api]
	I1101 00:20:46.746356   30437 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1101 00:20:46.746362   30437 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1101 00:20:46.746368   30437 command_runner.go:130] > # IP address on which the stream server will listen.
	I1101 00:20:46.746375   30437 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1101 00:20:46.746382   30437 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1101 00:20:46.746389   30437 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1101 00:20:46.746396   30437 command_runner.go:130] > # stream_port = "0"
	I1101 00:20:46.746404   30437 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1101 00:20:46.746408   30437 command_runner.go:130] > # stream_enable_tls = false
	I1101 00:20:46.746417   30437 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1101 00:20:46.746423   30437 command_runner.go:130] > # stream_idle_timeout = ""
	I1101 00:20:46.746430   30437 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1101 00:20:46.746439   30437 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1101 00:20:46.746443   30437 command_runner.go:130] > # minutes.
	I1101 00:20:46.746447   30437 command_runner.go:130] > # stream_tls_cert = ""
	I1101 00:20:46.746455   30437 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1101 00:20:46.746461   30437 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1101 00:20:46.746468   30437 command_runner.go:130] > # stream_tls_key = ""
	I1101 00:20:46.746475   30437 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1101 00:20:46.746483   30437 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1101 00:20:46.746489   30437 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1101 00:20:46.746495   30437 command_runner.go:130] > # stream_tls_ca = ""
	I1101 00:20:46.746503   30437 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1101 00:20:46.746510   30437 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1101 00:20:46.746518   30437 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1101 00:20:46.746525   30437 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1101 00:20:46.746540   30437 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1101 00:20:46.746548   30437 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1101 00:20:46.746553   30437 command_runner.go:130] > [crio.runtime]
	I1101 00:20:46.746560   30437 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1101 00:20:46.746568   30437 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1101 00:20:46.746574   30437 command_runner.go:130] > # "nofile=1024:2048"
	I1101 00:20:46.746582   30437 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1101 00:20:46.746586   30437 command_runner.go:130] > # default_ulimits = [
	I1101 00:20:46.746593   30437 command_runner.go:130] > # ]
	I1101 00:20:46.746599   30437 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1101 00:20:46.746605   30437 command_runner.go:130] > # no_pivot = false
	I1101 00:20:46.746611   30437 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1101 00:20:46.746619   30437 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1101 00:20:46.746624   30437 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1101 00:20:46.746632   30437 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1101 00:20:46.746637   30437 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1101 00:20:46.746646   30437 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1101 00:20:46.746651   30437 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1101 00:20:46.746655   30437 command_runner.go:130] > # Cgroup setting for conmon
	I1101 00:20:46.746664   30437 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1101 00:20:46.746670   30437 command_runner.go:130] > conmon_cgroup = "pod"
	I1101 00:20:46.746676   30437 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1101 00:20:46.746683   30437 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1101 00:20:46.746691   30437 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1101 00:20:46.746697   30437 command_runner.go:130] > conmon_env = [
	I1101 00:20:46.746703   30437 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1101 00:20:46.746709   30437 command_runner.go:130] > ]
	I1101 00:20:46.746714   30437 command_runner.go:130] > # Additional environment variables to set for all the
	I1101 00:20:46.746722   30437 command_runner.go:130] > # containers. These are overridden if set in the
	I1101 00:20:46.746730   30437 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1101 00:20:46.746734   30437 command_runner.go:130] > # default_env = [
	I1101 00:20:46.746740   30437 command_runner.go:130] > # ]
	I1101 00:20:46.746745   30437 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1101 00:20:46.746749   30437 command_runner.go:130] > # selinux = false
	I1101 00:20:46.746755   30437 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1101 00:20:46.746762   30437 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1101 00:20:46.746769   30437 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1101 00:20:46.746774   30437 command_runner.go:130] > # seccomp_profile = ""
	I1101 00:20:46.746779   30437 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1101 00:20:46.746787   30437 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1101 00:20:46.746793   30437 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1101 00:20:46.746803   30437 command_runner.go:130] > # which might increase security.
	I1101 00:20:46.746811   30437 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1101 00:20:46.746817   30437 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1101 00:20:46.746826   30437 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1101 00:20:46.746832   30437 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1101 00:20:46.746840   30437 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1101 00:20:46.746847   30437 command_runner.go:130] > # This option supports live configuration reload.
	I1101 00:20:46.746854   30437 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1101 00:20:46.746860   30437 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1101 00:20:46.746867   30437 command_runner.go:130] > # the cgroup blockio controller.
	I1101 00:20:46.746871   30437 command_runner.go:130] > # blockio_config_file = ""
	I1101 00:20:46.746880   30437 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1101 00:20:46.746884   30437 command_runner.go:130] > # irqbalance daemon.
	I1101 00:20:46.746890   30437 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1101 00:20:46.746898   30437 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1101 00:20:46.746903   30437 command_runner.go:130] > # This option supports live configuration reload.
	I1101 00:20:46.746910   30437 command_runner.go:130] > # rdt_config_file = ""
	I1101 00:20:46.746916   30437 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1101 00:20:46.746923   30437 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1101 00:20:46.746929   30437 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1101 00:20:46.746936   30437 command_runner.go:130] > # separate_pull_cgroup = ""
	I1101 00:20:46.746943   30437 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1101 00:20:46.746949   30437 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1101 00:20:46.746955   30437 command_runner.go:130] > # will be added.
	I1101 00:20:46.746960   30437 command_runner.go:130] > # default_capabilities = [
	I1101 00:20:46.746966   30437 command_runner.go:130] > # 	"CHOWN",
	I1101 00:20:46.746971   30437 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1101 00:20:46.746974   30437 command_runner.go:130] > # 	"FSETID",
	I1101 00:20:46.746978   30437 command_runner.go:130] > # 	"FOWNER",
	I1101 00:20:46.746984   30437 command_runner.go:130] > # 	"SETGID",
	I1101 00:20:46.746988   30437 command_runner.go:130] > # 	"SETUID",
	I1101 00:20:46.746994   30437 command_runner.go:130] > # 	"SETPCAP",
	I1101 00:20:46.746999   30437 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1101 00:20:46.747002   30437 command_runner.go:130] > # 	"KILL",
	I1101 00:20:46.747006   30437 command_runner.go:130] > # ]
	I1101 00:20:46.747012   30437 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1101 00:20:46.747021   30437 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1101 00:20:46.747027   30437 command_runner.go:130] > # default_sysctls = [
	I1101 00:20:46.747034   30437 command_runner.go:130] > # ]
	I1101 00:20:46.747041   30437 command_runner.go:130] > # List of devices on the host that a
	I1101 00:20:46.747047   30437 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1101 00:20:46.747051   30437 command_runner.go:130] > # allowed_devices = [
	I1101 00:20:46.747055   30437 command_runner.go:130] > # 	"/dev/fuse",
	I1101 00:20:46.747061   30437 command_runner.go:130] > # ]
	I1101 00:20:46.747067   30437 command_runner.go:130] > # List of additional devices, specified as
	I1101 00:20:46.747075   30437 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1101 00:20:46.747080   30437 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1101 00:20:46.747106   30437 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1101 00:20:46.747114   30437 command_runner.go:130] > # additional_devices = [
	I1101 00:20:46.747117   30437 command_runner.go:130] > # ]
	I1101 00:20:46.747122   30437 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1101 00:20:46.747127   30437 command_runner.go:130] > # cdi_spec_dirs = [
	I1101 00:20:46.747133   30437 command_runner.go:130] > # 	"/etc/cdi",
	I1101 00:20:46.747137   30437 command_runner.go:130] > # 	"/var/run/cdi",
	I1101 00:20:46.747141   30437 command_runner.go:130] > # ]
	I1101 00:20:46.747149   30437 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1101 00:20:46.747155   30437 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1101 00:20:46.747162   30437 command_runner.go:130] > # Defaults to false.
	I1101 00:20:46.747167   30437 command_runner.go:130] > # device_ownership_from_security_context = false
	I1101 00:20:46.747174   30437 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1101 00:20:46.747182   30437 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1101 00:20:46.747186   30437 command_runner.go:130] > # hooks_dir = [
	I1101 00:20:46.747193   30437 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1101 00:20:46.747197   30437 command_runner.go:130] > # ]
	I1101 00:20:46.747203   30437 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1101 00:20:46.747212   30437 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1101 00:20:46.747217   30437 command_runner.go:130] > # its default mounts from the following two files:
	I1101 00:20:46.747223   30437 command_runner.go:130] > #
	I1101 00:20:46.747229   30437 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1101 00:20:46.747237   30437 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1101 00:20:46.747243   30437 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1101 00:20:46.747249   30437 command_runner.go:130] > #
	I1101 00:20:46.747257   30437 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1101 00:20:46.747267   30437 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1101 00:20:46.747273   30437 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1101 00:20:46.747281   30437 command_runner.go:130] > #      only add mounts it finds in this file.
	I1101 00:20:46.747285   30437 command_runner.go:130] > #
	I1101 00:20:46.747292   30437 command_runner.go:130] > # default_mounts_file = ""
	I1101 00:20:46.747297   30437 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1101 00:20:46.747306   30437 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1101 00:20:46.747311   30437 command_runner.go:130] > pids_limit = 1024
	I1101 00:20:46.747319   30437 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1101 00:20:46.747326   30437 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1101 00:20:46.747334   30437 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1101 00:20:46.747342   30437 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1101 00:20:46.747348   30437 command_runner.go:130] > # log_size_max = -1
	I1101 00:20:46.747355   30437 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1101 00:20:46.747360   30437 command_runner.go:130] > # log_to_journald = false
	I1101 00:20:46.747366   30437 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1101 00:20:46.747373   30437 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1101 00:20:46.747378   30437 command_runner.go:130] > # Path to directory for container attach sockets.
	I1101 00:20:46.747386   30437 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1101 00:20:46.747391   30437 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1101 00:20:46.747398   30437 command_runner.go:130] > # bind_mount_prefix = ""
	I1101 00:20:46.747403   30437 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1101 00:20:46.747410   30437 command_runner.go:130] > # read_only = false
	I1101 00:20:46.747416   30437 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1101 00:20:46.747425   30437 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1101 00:20:46.747429   30437 command_runner.go:130] > # live configuration reload.
	I1101 00:20:46.747436   30437 command_runner.go:130] > # log_level = "info"
	I1101 00:20:46.747441   30437 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1101 00:20:46.747448   30437 command_runner.go:130] > # This option supports live configuration reload.
	I1101 00:20:46.747452   30437 command_runner.go:130] > # log_filter = ""
	I1101 00:20:46.747459   30437 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1101 00:20:46.747467   30437 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1101 00:20:46.747471   30437 command_runner.go:130] > # separated by comma.
	I1101 00:20:46.747477   30437 command_runner.go:130] > # uid_mappings = ""
	I1101 00:20:46.747483   30437 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1101 00:20:46.747493   30437 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1101 00:20:46.747500   30437 command_runner.go:130] > # separated by comma.
	I1101 00:20:46.747504   30437 command_runner.go:130] > # gid_mappings = ""
	I1101 00:20:46.747513   30437 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1101 00:20:46.747520   30437 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1101 00:20:46.747528   30437 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1101 00:20:46.747532   30437 command_runner.go:130] > # minimum_mappable_uid = -1
	I1101 00:20:46.747540   30437 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1101 00:20:46.747546   30437 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1101 00:20:46.747554   30437 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1101 00:20:46.747559   30437 command_runner.go:130] > # minimum_mappable_gid = -1
	I1101 00:20:46.747567   30437 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1101 00:20:46.747573   30437 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1101 00:20:46.747580   30437 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1101 00:20:46.747586   30437 command_runner.go:130] > # ctr_stop_timeout = 30
	I1101 00:20:46.747592   30437 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1101 00:20:46.747600   30437 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1101 00:20:46.747606   30437 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1101 00:20:46.747611   30437 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1101 00:20:46.747617   30437 command_runner.go:130] > drop_infra_ctr = false
	I1101 00:20:46.747626   30437 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1101 00:20:46.747631   30437 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1101 00:20:46.747641   30437 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1101 00:20:46.747647   30437 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1101 00:20:46.747653   30437 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1101 00:20:46.747660   30437 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1101 00:20:46.747664   30437 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1101 00:20:46.747674   30437 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1101 00:20:46.747681   30437 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1101 00:20:46.747687   30437 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1101 00:20:46.747696   30437 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1101 00:20:46.747702   30437 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1101 00:20:46.747706   30437 command_runner.go:130] > # default_runtime = "runc"
	I1101 00:20:46.747713   30437 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1101 00:20:46.747721   30437 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1101 00:20:46.747732   30437 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1101 00:20:46.747740   30437 command_runner.go:130] > # creation as a file is not desired either.
	I1101 00:20:46.747748   30437 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1101 00:20:46.747756   30437 command_runner.go:130] > # the hostname is being managed dynamically.
	I1101 00:20:46.747761   30437 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1101 00:20:46.747767   30437 command_runner.go:130] > # ]
	I1101 00:20:46.747773   30437 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1101 00:20:46.747781   30437 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1101 00:20:46.747788   30437 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1101 00:20:46.747796   30437 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1101 00:20:46.747799   30437 command_runner.go:130] > #
	I1101 00:20:46.747804   30437 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1101 00:20:46.747810   30437 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1101 00:20:46.747814   30437 command_runner.go:130] > #  runtime_type = "oci"
	I1101 00:20:46.747820   30437 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1101 00:20:46.747824   30437 command_runner.go:130] > #  privileged_without_host_devices = false
	I1101 00:20:46.747831   30437 command_runner.go:130] > #  allowed_annotations = []
	I1101 00:20:46.747835   30437 command_runner.go:130] > # Where:
	I1101 00:20:46.747840   30437 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1101 00:20:46.747849   30437 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1101 00:20:46.747856   30437 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1101 00:20:46.747865   30437 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1101 00:20:46.747869   30437 command_runner.go:130] > #   in $PATH.
	I1101 00:20:46.747876   30437 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1101 00:20:46.747884   30437 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1101 00:20:46.747890   30437 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1101 00:20:46.747896   30437 command_runner.go:130] > #   state.
	I1101 00:20:46.747902   30437 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1101 00:20:46.747908   30437 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1101 00:20:46.747916   30437 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1101 00:20:46.747922   30437 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1101 00:20:46.747948   30437 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1101 00:20:46.747963   30437 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1101 00:20:46.747968   30437 command_runner.go:130] > #   The currently recognized values are:
	I1101 00:20:46.747976   30437 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1101 00:20:46.747984   30437 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1101 00:20:46.747992   30437 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1101 00:20:46.747999   30437 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1101 00:20:46.748009   30437 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1101 00:20:46.748017   30437 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1101 00:20:46.748027   30437 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1101 00:20:46.748038   30437 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1101 00:20:46.748045   30437 command_runner.go:130] > #   should be moved to the container's cgroup
	I1101 00:20:46.748050   30437 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1101 00:20:46.748060   30437 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1101 00:20:46.748064   30437 command_runner.go:130] > runtime_type = "oci"
	I1101 00:20:46.748069   30437 command_runner.go:130] > runtime_root = "/run/runc"
	I1101 00:20:46.748075   30437 command_runner.go:130] > runtime_config_path = ""
	I1101 00:20:46.748079   30437 command_runner.go:130] > monitor_path = ""
	I1101 00:20:46.748086   30437 command_runner.go:130] > monitor_cgroup = ""
	I1101 00:20:46.748091   30437 command_runner.go:130] > monitor_exec_cgroup = ""
	I1101 00:20:46.748100   30437 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1101 00:20:46.748104   30437 command_runner.go:130] > # running containers
	I1101 00:20:46.748108   30437 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1101 00:20:46.748117   30437 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1101 00:20:46.748145   30437 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1101 00:20:46.748156   30437 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1101 00:20:46.748162   30437 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1101 00:20:46.748166   30437 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1101 00:20:46.748171   30437 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1101 00:20:46.748176   30437 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1101 00:20:46.748181   30437 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1101 00:20:46.748188   30437 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1101 00:20:46.748195   30437 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1101 00:20:46.748202   30437 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1101 00:20:46.748209   30437 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1101 00:20:46.748218   30437 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1101 00:20:46.748228   30437 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1101 00:20:46.748234   30437 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1101 00:20:46.748245   30437 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1101 00:20:46.748255   30437 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1101 00:20:46.748260   30437 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1101 00:20:46.748268   30437 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1101 00:20:46.748273   30437 command_runner.go:130] > # Example:
	I1101 00:20:46.748279   30437 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1101 00:20:46.748284   30437 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1101 00:20:46.748291   30437 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1101 00:20:46.748297   30437 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1101 00:20:46.748303   30437 command_runner.go:130] > # cpuset = 0
	I1101 00:20:46.748307   30437 command_runner.go:130] > # cpushares = "0-1"
	I1101 00:20:46.748313   30437 command_runner.go:130] > # Where:
	I1101 00:20:46.748318   30437 command_runner.go:130] > # The workload name is workload-type.
	I1101 00:20:46.748328   30437 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1101 00:20:46.748333   30437 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1101 00:20:46.748340   30437 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1101 00:20:46.748349   30437 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1101 00:20:46.748358   30437 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1101 00:20:46.748361   30437 command_runner.go:130] > # 
	I1101 00:20:46.748370   30437 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1101 00:20:46.748374   30437 command_runner.go:130] > #
	I1101 00:20:46.748382   30437 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1101 00:20:46.748391   30437 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1101 00:20:46.748399   30437 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1101 00:20:46.748406   30437 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1101 00:20:46.748414   30437 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1101 00:20:46.748418   30437 command_runner.go:130] > [crio.image]
	I1101 00:20:46.748426   30437 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1101 00:20:46.748431   30437 command_runner.go:130] > # default_transport = "docker://"
	I1101 00:20:46.748439   30437 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1101 00:20:46.748446   30437 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1101 00:20:46.748452   30437 command_runner.go:130] > # global_auth_file = ""
	I1101 00:20:46.748458   30437 command_runner.go:130] > # The image used to instantiate infra containers.
	I1101 00:20:46.748463   30437 command_runner.go:130] > # This option supports live configuration reload.
	I1101 00:20:46.748468   30437 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1101 00:20:46.748475   30437 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1101 00:20:46.748483   30437 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1101 00:20:46.748488   30437 command_runner.go:130] > # This option supports live configuration reload.
	I1101 00:20:46.748495   30437 command_runner.go:130] > # pause_image_auth_file = ""
	I1101 00:20:46.748501   30437 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1101 00:20:46.748510   30437 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1101 00:20:46.748516   30437 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1101 00:20:46.748524   30437 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1101 00:20:46.748529   30437 command_runner.go:130] > # pause_command = "/pause"
	I1101 00:20:46.748536   30437 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1101 00:20:46.748542   30437 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1101 00:20:46.748551   30437 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1101 00:20:46.748557   30437 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1101 00:20:46.748565   30437 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1101 00:20:46.748569   30437 command_runner.go:130] > # signature_policy = ""
	I1101 00:20:46.748578   30437 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1101 00:20:46.748585   30437 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1101 00:20:46.748591   30437 command_runner.go:130] > # changing them here.
	I1101 00:20:46.748595   30437 command_runner.go:130] > # insecure_registries = [
	I1101 00:20:46.748601   30437 command_runner.go:130] > # ]
	I1101 00:20:46.748610   30437 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1101 00:20:46.748618   30437 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1101 00:20:46.748623   30437 command_runner.go:130] > # image_volumes = "mkdir"
	I1101 00:20:46.748629   30437 command_runner.go:130] > # Temporary directory to use for storing big files
	I1101 00:20:46.748633   30437 command_runner.go:130] > # big_files_temporary_dir = ""
	I1101 00:20:46.748642   30437 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1101 00:20:46.748646   30437 command_runner.go:130] > # CNI plugins.
	I1101 00:20:46.748649   30437 command_runner.go:130] > [crio.network]
	I1101 00:20:46.748656   30437 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1101 00:20:46.748663   30437 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1101 00:20:46.748669   30437 command_runner.go:130] > # cni_default_network = ""
	I1101 00:20:46.748677   30437 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1101 00:20:46.748681   30437 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1101 00:20:46.748688   30437 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1101 00:20:46.748692   30437 command_runner.go:130] > # plugin_dirs = [
	I1101 00:20:46.748698   30437 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1101 00:20:46.748702   30437 command_runner.go:130] > # ]
	I1101 00:20:46.748710   30437 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1101 00:20:46.748714   30437 command_runner.go:130] > [crio.metrics]
	I1101 00:20:46.748722   30437 command_runner.go:130] > # Globally enable or disable metrics support.
	I1101 00:20:46.748726   30437 command_runner.go:130] > enable_metrics = true
	I1101 00:20:46.748731   30437 command_runner.go:130] > # Specify enabled metrics collectors.
	I1101 00:20:46.748737   30437 command_runner.go:130] > # Per default all metrics are enabled.
	I1101 00:20:46.748743   30437 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1101 00:20:46.748752   30437 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1101 00:20:46.748758   30437 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1101 00:20:46.748765   30437 command_runner.go:130] > # metrics_collectors = [
	I1101 00:20:46.748769   30437 command_runner.go:130] > # 	"operations",
	I1101 00:20:46.748776   30437 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1101 00:20:46.748781   30437 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1101 00:20:46.748785   30437 command_runner.go:130] > # 	"operations_errors",
	I1101 00:20:46.748790   30437 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1101 00:20:46.748797   30437 command_runner.go:130] > # 	"image_pulls_by_name",
	I1101 00:20:46.748801   30437 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1101 00:20:46.748808   30437 command_runner.go:130] > # 	"image_pulls_failures",
	I1101 00:20:46.748812   30437 command_runner.go:130] > # 	"image_pulls_successes",
	I1101 00:20:46.748819   30437 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1101 00:20:46.748823   30437 command_runner.go:130] > # 	"image_layer_reuse",
	I1101 00:20:46.748827   30437 command_runner.go:130] > # 	"containers_oom_total",
	I1101 00:20:46.748834   30437 command_runner.go:130] > # 	"containers_oom",
	I1101 00:20:46.748838   30437 command_runner.go:130] > # 	"processes_defunct",
	I1101 00:20:46.748845   30437 command_runner.go:130] > # 	"operations_total",
	I1101 00:20:46.748849   30437 command_runner.go:130] > # 	"operations_latency_seconds",
	I1101 00:20:46.748854   30437 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1101 00:20:46.748861   30437 command_runner.go:130] > # 	"operations_errors_total",
	I1101 00:20:46.748865   30437 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1101 00:20:46.748873   30437 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1101 00:20:46.748877   30437 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1101 00:20:46.748882   30437 command_runner.go:130] > # 	"image_pulls_success_total",
	I1101 00:20:46.748887   30437 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1101 00:20:46.748894   30437 command_runner.go:130] > # 	"containers_oom_count_total",
	I1101 00:20:46.748898   30437 command_runner.go:130] > # ]
	I1101 00:20:46.748905   30437 command_runner.go:130] > # The port on which the metrics server will listen.
	I1101 00:20:46.748910   30437 command_runner.go:130] > # metrics_port = 9090
	I1101 00:20:46.748917   30437 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1101 00:20:46.748921   30437 command_runner.go:130] > # metrics_socket = ""
	I1101 00:20:46.748926   30437 command_runner.go:130] > # The certificate for the secure metrics server.
	I1101 00:20:46.748936   30437 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1101 00:20:46.748942   30437 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1101 00:20:46.748950   30437 command_runner.go:130] > # certificate on any modification event.
	I1101 00:20:46.748954   30437 command_runner.go:130] > # metrics_cert = ""
	I1101 00:20:46.748961   30437 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1101 00:20:46.748966   30437 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1101 00:20:46.748972   30437 command_runner.go:130] > # metrics_key = ""
	I1101 00:20:46.748978   30437 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1101 00:20:46.748982   30437 command_runner.go:130] > [crio.tracing]
	I1101 00:20:46.748988   30437 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1101 00:20:46.748995   30437 command_runner.go:130] > # enable_tracing = false
	I1101 00:20:46.749000   30437 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1101 00:20:46.749007   30437 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1101 00:20:46.749012   30437 command_runner.go:130] > # Number of samples to collect per million spans.
	I1101 00:20:46.749019   30437 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1101 00:20:46.749025   30437 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1101 00:20:46.749029   30437 command_runner.go:130] > [crio.stats]
	I1101 00:20:46.749040   30437 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1101 00:20:46.749046   30437 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1101 00:20:46.749053   30437 command_runner.go:130] > # stats_collection_period = 0
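
The [crio.metrics] stanza in the dump above sets enable_metrics = true and leaves metrics_port at its commented default, so CRI-O should serve Prometheus metrics on port 9090 of the node. A minimal sketch of scraping that endpoint; the default port and the localhost address are assumptions, and it has to run on the node itself (for example inside "minikube ssh") since the metrics server is not exposed outside the VM:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Assumes CRI-O's default metrics_port (9090) on the local node.
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// Prometheus exposition format: one sample per line, with names
	// prefixed "crio_" or "container_runtime_" as described in the config.
	fmt.Printf("%s", body)
}

The collectors enumerated in the commented metrics_collectors list (operations, image_pulls_by_digest, containers_oom_total, and so on) show up in that output as individual metric families.
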
	I1101 00:20:46.749111   30437 cni.go:84] Creating CNI manager for ""
	I1101 00:20:46.749121   30437 cni.go:136] 3 nodes found, recommending kindnet
	I1101 00:20:46.749128   30437 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 00:20:46.749145   30437 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.109 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-600483 NodeName:multinode-600483-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.130"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.109 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 00:20:46.749247   30437 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.109
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-600483-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.109
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.130"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
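
The kubeadm config printed above is a single multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by --- markers. A minimal sketch of walking such a stream and printing each document's apiVersion and kind, using gopkg.in/yaml.v3; the kubeadm.yaml path is hypothetical and stands in for wherever the stream gets written:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Hypothetical path: wherever the generated kubeadm config was saved.
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// yaml.v3 decodes a multi-document stream one document per Decode call.
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
	}
}
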
	
	I1101 00:20:46.749299   30437 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-600483-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-600483 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1101 00:20:46.749348   30437 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 00:20:46.759764   30437 command_runner.go:130] > kubeadm
	I1101 00:20:46.759790   30437 command_runner.go:130] > kubectl
	I1101 00:20:46.759795   30437 command_runner.go:130] > kubelet
	I1101 00:20:46.759813   30437 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 00:20:46.759856   30437 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1101 00:20:46.769563   30437 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1101 00:20:46.785654   30437 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 00:20:46.801360   30437 ssh_runner.go:195] Run: grep 192.168.39.130	control-plane.minikube.internal$ /etc/hosts
	I1101 00:20:46.805095   30437 command_runner.go:130] > 192.168.39.130	control-plane.minikube.internal
	I1101 00:20:46.805195   30437 host.go:66] Checking if "multinode-600483" exists ...
	I1101 00:20:46.805468   30437 config.go:182] Loaded profile config "multinode-600483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:20:46.805513   30437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1101 00:20:46.805544   30437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:20:46.822117   30437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46163
	I1101 00:20:46.822571   30437 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:20:46.823052   30437 main.go:141] libmachine: Using API Version  1
	I1101 00:20:46.823075   30437 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:20:46.823375   30437 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:20:46.823561   30437 main.go:141] libmachine: (multinode-600483) Calling .DriverName
	I1101 00:20:46.823704   30437 start.go:304] JoinCluster: &{Name:multinode-600483 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.3 ClusterName:multinode-600483 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.130 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.109 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.2 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false i
ngress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:20:46.823828   30437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1101 00:20:46.823847   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHHostname
	I1101 00:20:46.826502   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:20:46.826879   30437 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:20:46.826901   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:20:46.827049   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHPort
	I1101 00:20:46.827248   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:20:46.827393   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHUsername
	I1101 00:20:46.827517   30437 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483/id_rsa Username:docker}
	I1101 00:20:47.001332   30437 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token dahawv.6cn5g6qc6cguextv --discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 
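
The join command returned by "kubeadm token create --print-join-command" above embeds a --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A minimal sketch of recomputing that value from the CA certificate; /var/lib/minikube/certs/ca.crt is the path this cluster uses (see the kubeadm config above), so substitute your own CA file on other clusters:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Node-local path from the kubeadm config above; adjust for other clusters.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
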
	I1101 00:20:47.001393   30437 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.109 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1101 00:20:47.001425   30437 host.go:66] Checking if "multinode-600483" exists ...
	I1101 00:20:47.001886   30437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1101 00:20:47.001921   30437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:20:47.016348   30437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43327
	I1101 00:20:47.016776   30437 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:20:47.017241   30437 main.go:141] libmachine: Using API Version  1
	I1101 00:20:47.017270   30437 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:20:47.017559   30437 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:20:47.017765   30437 main.go:141] libmachine: (multinode-600483) Calling .DriverName
	I1101 00:20:47.017932   30437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl drain multinode-600483-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1101 00:20:47.017957   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHHostname
	I1101 00:20:47.020850   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:20:47.021310   30437 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:20:47.021344   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:20:47.021504   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHPort
	I1101 00:20:47.021672   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:20:47.021847   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHUsername
	I1101 00:20:47.021966   30437 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483/id_rsa Username:docker}
	I1101 00:20:47.211709   30437 command_runner.go:130] > node/multinode-600483-m02 cordoned
	I1101 00:20:50.253823   30437 command_runner.go:130] > pod "busybox-5bc68d56bd-6jjms" has DeletionTimestamp older than 1 seconds, skipping
	I1101 00:20:50.253849   30437 command_runner.go:130] > node/multinode-600483-m02 drained
	I1101 00:20:50.253932   30437 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1101 00:20:50.253947   30437 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-d4f6q, kube-system/kube-proxy-7kvtf
	I1101 00:20:50.253967   30437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl drain multinode-600483-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.236014779s)
	I1101 00:20:50.253985   30437 node.go:108] successfully drained node "m02"
	I1101 00:20:50.254354   30437 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 00:20:50.254612   30437 kapi.go:59] client config for multinode-600483: &rest.Config{Host:"https://192.168.39.130:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.key", CAFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 00:20:50.254960   30437 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1101 00:20:50.255024   30437 round_trippers.go:463] DELETE https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m02
	I1101 00:20:50.255035   30437 round_trippers.go:469] Request Headers:
	I1101 00:20:50.255047   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:20:50.255057   30437 round_trippers.go:473]     Content-Type: application/json
	I1101 00:20:50.255070   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:20:50.270530   30437 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1101 00:20:50.270558   30437 round_trippers.go:577] Response Headers:
	I1101 00:20:50.270567   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:20:50.270576   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:20:50.270589   30437 round_trippers.go:580]     Content-Length: 171
	I1101 00:20:50.270598   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:20:50 GMT
	I1101 00:20:50.270606   30437 round_trippers.go:580]     Audit-Id: 4d6769af-bd16-4ab6-a6b5-df7bc4a2748d
	I1101 00:20:50.270615   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:20:50.270623   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:20:50.270662   30437 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-600483-m02","kind":"nodes","uid":"5b2b1f13-2a35-43d5-86a5-bb5c1d6395e1"}}
	I1101 00:20:50.270710   30437 node.go:124] successfully deleted node "m02"
	I1101 00:20:50.270720   30437 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.109 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
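
Removing the stale m02 node object boils down to the DELETE https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m02 request logged above. A minimal client-go sketch of the same deletion; the kubeconfig path is the node-local one used in this run and the node name is taken from the log, so both are placeholders for other clusters:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Node-local kubeconfig from the log; substitute your own path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Equivalent to the DELETE /api/v1/nodes/multinode-600483-m02 call above.
	if err := clientset.CoreV1().Nodes().Delete(context.Background(),
		"multinode-600483-m02", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}
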
	I1101 00:20:50.270743   30437 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.109 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1101 00:20:50.270764   30437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dahawv.6cn5g6qc6cguextv --discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-600483-m02"
	I1101 00:20:50.350683   30437 command_runner.go:130] > [preflight] Running pre-flight checks
	I1101 00:20:50.538770   30437 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1101 00:20:50.538800   30437 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1101 00:20:50.607990   30437 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 00:20:50.608211   30437 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 00:20:50.608769   30437 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1101 00:20:50.751549   30437 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1101 00:20:51.276708   30437 command_runner.go:130] > This node has joined the cluster:
	I1101 00:20:51.276730   30437 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1101 00:20:51.276737   30437 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1101 00:20:51.276745   30437 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1101 00:20:51.279437   30437 command_runner.go:130] ! W1101 00:20:50.342584    2782 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1101 00:20:51.279466   30437 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1101 00:20:51.279476   30437 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1101 00:20:51.279492   30437 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1101 00:20:51.279514   30437 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dahawv.6cn5g6qc6cguextv --discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-600483-m02": (1.008729059s)
	I1101 00:20:51.279536   30437 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1101 00:20:51.592390   30437 start.go:306] JoinCluster complete in 4.768662188s
	I1101 00:20:51.592432   30437 cni.go:84] Creating CNI manager for ""
	I1101 00:20:51.592440   30437 cni.go:136] 3 nodes found, recommending kindnet
	I1101 00:20:51.592503   30437 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 00:20:51.599028   30437 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1101 00:20:51.599059   30437 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1101 00:20:51.599070   30437 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1101 00:20:51.599081   30437 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1101 00:20:51.599092   30437 command_runner.go:130] > Access: 2023-11-01 00:18:23.441951890 +0000
	I1101 00:20:51.599100   30437 command_runner.go:130] > Modify: 2023-10-31 23:04:20.000000000 +0000
	I1101 00:20:51.599108   30437 command_runner.go:130] > Change: 2023-11-01 00:18:21.588951890 +0000
	I1101 00:20:51.599113   30437 command_runner.go:130] >  Birth: -
	I1101 00:20:51.599219   30437 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1101 00:20:51.599236   30437 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1101 00:20:51.617345   30437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 00:20:51.948657   30437 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1101 00:20:51.953564   30437 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1101 00:20:51.956420   30437 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1101 00:20:51.972659   30437 command_runner.go:130] > daemonset.apps/kindnet configured
	I1101 00:20:51.976076   30437 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 00:20:51.976309   30437 kapi.go:59] client config for multinode-600483: &rest.Config{Host:"https://192.168.39.130:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.key", CAFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 00:20:51.976582   30437 round_trippers.go:463] GET https://192.168.39.130:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1101 00:20:51.976594   30437 round_trippers.go:469] Request Headers:
	I1101 00:20:51.976605   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:20:51.976614   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:20:51.979515   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:20:51.979532   30437 round_trippers.go:577] Response Headers:
	I1101 00:20:51.979538   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:20:51 GMT
	I1101 00:20:51.979544   30437 round_trippers.go:580]     Audit-Id: 9feb7c4d-e059-4366-93ff-67f95bd60b34
	I1101 00:20:51.979549   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:20:51.979574   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:20:51.979581   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:20:51.979593   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:20:51.979602   30437 round_trippers.go:580]     Content-Length: 291
	I1101 00:20:51.979678   30437 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"21004493-8bb6-43e9-8ba2-65d98d570b24","resourceVersion":"848","creationTimestamp":"2023-11-01T00:08:30Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1101 00:20:51.979771   30437 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-600483" context rescaled to 1 replicas
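
After the worker rejoins, the coredns deployment is pinned back to a single replica through its Scale subresource, which is the GET .../deployments/coredns/scale request above followed by an update. A minimal client-go sketch of the same read-modify-write on the Scale subresource, assuming a kubeconfig in the default home location:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Read the Scale subresource of kube-system/coredns ...
	scale, err := clientset.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// ... and pin it to one replica, as the rescale above does.
	scale.Spec.Replicas = 1
	if _, err := clientset.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns rescaled to 1 replica")
}
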
	I1101 00:20:51.979802   30437 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.109 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1101 00:20:51.982019   30437 out.go:177] * Verifying Kubernetes components...
	I1101 00:20:51.983698   30437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 00:20:52.001646   30437 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 00:20:52.001855   30437 kapi.go:59] client config for multinode-600483: &rest.Config{Host:"https://192.168.39.130:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.key", CAFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 00:20:52.002067   30437 node_ready.go:35] waiting up to 6m0s for node "multinode-600483-m02" to be "Ready" ...
	I1101 00:20:52.002124   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m02
	I1101 00:20:52.002141   30437 round_trippers.go:469] Request Headers:
	I1101 00:20:52.002151   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:20:52.002160   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:20:52.004970   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:20:52.004994   30437 round_trippers.go:577] Response Headers:
	I1101 00:20:52.005008   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:20:52 GMT
	I1101 00:20:52.005016   30437 round_trippers.go:580]     Audit-Id: 0c5aa1d0-4ddf-4994-9be8-8235f32245a8
	I1101 00:20:52.005024   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:20:52.005031   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:20:52.005042   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:20:52.005052   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:20:52.005290   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483-m02","uid":"36dbbc60-53e2-44a7-8be1-589b70b73c26","resourceVersion":"1012","creationTimestamp":"2023-11-01T00:20:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:20:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}
}}}},{"manager":"kube-controller-manager","operation":"Update","apiVers [truncated 3671 chars]
	I1101 00:20:52.005600   30437 node_ready.go:49] node "multinode-600483-m02" has status "Ready":"True"
	I1101 00:20:52.005617   30437 node_ready.go:38] duration metric: took 3.536312ms waiting for node "multinode-600483-m02" to be "Ready" ...
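
The node_ready wait above repeatedly GETs the new node and checks its Ready condition. A minimal client-go sketch of the same polling loop; the node name comes from this run, the 6-minute budget mirrors the timeout stated in the log, and the 2-second poll interval is an arbitrary choice:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		node, err := clientset.CoreV1().Nodes().Get(ctx, "multinode-600483-m02", metav1.GetOptions{})
		if err == nil {
			// Node is Ready once the NodeReady condition reports True.
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for node to become Ready")
		case <-time.After(2 * time.Second):
		}
	}
}
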
	I1101 00:20:52.005625   30437 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 00:20:52.005691   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods
	I1101 00:20:52.005700   30437 round_trippers.go:469] Request Headers:
	I1101 00:20:52.005707   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:20:52.005713   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:20:52.009404   30437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:20:52.009435   30437 round_trippers.go:577] Response Headers:
	I1101 00:20:52.009446   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:20:52.009454   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:20:52.009462   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:20:52 GMT
	I1101 00:20:52.009471   30437 round_trippers.go:580]     Audit-Id: 125a6eac-7fde-4d85-b17b-7ec9126b431f
	I1101 00:20:52.009512   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:20:52.009523   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:20:52.011115   30437 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1017"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rpvvn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d8ab0ebb-aa1f-4143-b987-6c1ae065954a","resourceVersion":"833","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15779dee-f1e7-4836-aba2-2d57728c2309","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15779dee-f1e7-4836-aba2-2d57728c2309\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82070 chars]
	I1101 00:20:52.013544   30437 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rpvvn" in "kube-system" namespace to be "Ready" ...
	I1101 00:20:52.013623   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rpvvn
	I1101 00:20:52.013635   30437 round_trippers.go:469] Request Headers:
	I1101 00:20:52.013643   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:20:52.013649   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:20:52.016297   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:20:52.016314   30437 round_trippers.go:577] Response Headers:
	I1101 00:20:52.016321   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:20:52.016326   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:20:52.016331   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:20:52 GMT
	I1101 00:20:52.016336   30437 round_trippers.go:580]     Audit-Id: 345eed0b-d7b4-4cd5-bf96-e59172764eaa
	I1101 00:20:52.016341   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:20:52.016346   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:20:52.016475   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rpvvn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d8ab0ebb-aa1f-4143-b987-6c1ae065954a","resourceVersion":"833","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15779dee-f1e7-4836-aba2-2d57728c2309","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15779dee-f1e7-4836-aba2-2d57728c2309\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1101 00:20:52.016895   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:20:52.016908   30437 round_trippers.go:469] Request Headers:
	I1101 00:20:52.016917   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:20:52.016926   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:20:52.019448   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:20:52.019475   30437 round_trippers.go:577] Response Headers:
	I1101 00:20:52.019482   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:20:52.019488   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:20:52 GMT
	I1101 00:20:52.019494   30437 round_trippers.go:580]     Audit-Id: 9e5ddb77-9def-435a-8529-d010dc831c14
	I1101 00:20:52.019503   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:20:52.019511   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:20:52.019524   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:20:52.019681   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"865","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 6220 chars]
	I1101 00:20:52.020026   30437 pod_ready.go:92] pod "coredns-5dd5756b68-rpvvn" in "kube-system" namespace has status "Ready":"True"
	I1101 00:20:52.020041   30437 pod_ready.go:81] duration metric: took 6.471989ms waiting for pod "coredns-5dd5756b68-rpvvn" in "kube-system" namespace to be "Ready" ...
	I1101 00:20:52.020053   30437 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:20:52.020108   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-600483
	I1101 00:20:52.020118   30437 round_trippers.go:469] Request Headers:
	I1101 00:20:52.020129   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:20:52.020139   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:20:52.022268   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:20:52.022288   30437 round_trippers.go:577] Response Headers:
	I1101 00:20:52.022297   30437 round_trippers.go:580]     Audit-Id: d412d1c9-ed38-44c7-8447-b5501d3e7317
	I1101 00:20:52.022305   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:20:52.022312   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:20:52.022324   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:20:52.022334   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:20:52.022342   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:20:52 GMT
	I1101 00:20:52.022491   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-600483","namespace":"kube-system","uid":"c612ebac-fa1d-474a-b8cd-5e922a5f76dd","resourceVersion":"827","creationTimestamp":"2023-11-01T00:08:30Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.130:2379","kubernetes.io/config.hash":"5629fb0a0414e85632f97c416152ffbb","kubernetes.io/config.mirror":"5629fb0a0414e85632f97c416152ffbb","kubernetes.io/config.seen":"2023-11-01T00:08:30.293496672Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1101 00:20:52.022929   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:20:52.022938   30437 round_trippers.go:469] Request Headers:
	I1101 00:20:52.022945   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:20:52.022951   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:20:52.025800   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:20:52.025822   30437 round_trippers.go:577] Response Headers:
	I1101 00:20:52.025831   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:20:52.025840   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:20:52.025848   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:20:52 GMT
	I1101 00:20:52.025856   30437 round_trippers.go:580]     Audit-Id: 3f2f088a-c48a-4ab5-a20a-4d509a4c8290
	I1101 00:20:52.025868   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:20:52.025879   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:20:52.026474   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"865","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 6220 chars]
	I1101 00:20:52.026765   30437 pod_ready.go:92] pod "etcd-multinode-600483" in "kube-system" namespace has status "Ready":"True"
	I1101 00:20:52.026779   30437 pod_ready.go:81] duration metric: took 6.71915ms waiting for pod "etcd-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:20:52.026795   30437 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:20:52.026848   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-600483
	I1101 00:20:52.026855   30437 round_trippers.go:469] Request Headers:
	I1101 00:20:52.026862   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:20:52.026869   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:20:52.029170   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:20:52.029191   30437 round_trippers.go:577] Response Headers:
	I1101 00:20:52.029201   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:20:52.029209   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:20:52.029220   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:20:52.029229   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:20:52.029238   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:20:52 GMT
	I1101 00:20:52.029247   30437 round_trippers.go:580]     Audit-Id: 161bc7b6-86a6-44eb-a33e-b9faf7efeffd
	I1101 00:20:52.029409   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-600483","namespace":"kube-system","uid":"bd94a63a-62c2-4654-aaf0-2e9df086b168","resourceVersion":"843","creationTimestamp":"2023-11-01T00:08:30Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.130:8443","kubernetes.io/config.hash":"99a9cda13526c350638742a7c7b2ba52","kubernetes.io/config.mirror":"99a9cda13526c350638742a7c7b2ba52","kubernetes.io/config.seen":"2023-11-01T00:08:30.293497612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1101 00:20:52.029915   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:20:52.029934   30437 round_trippers.go:469] Request Headers:
	I1101 00:20:52.029956   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:20:52.029970   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:20:52.032018   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:20:52.032040   30437 round_trippers.go:577] Response Headers:
	I1101 00:20:52.032048   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:20:52 GMT
	I1101 00:20:52.032056   30437 round_trippers.go:580]     Audit-Id: 0525624e-2322-44c5-8da3-456e3db3da84
	I1101 00:20:52.032073   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:20:52.032084   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:20:52.032091   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:20:52.032102   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:20:52.032277   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"865","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 6220 chars]
	I1101 00:20:52.032657   30437 pod_ready.go:92] pod "kube-apiserver-multinode-600483" in "kube-system" namespace has status "Ready":"True"
	I1101 00:20:52.032674   30437 pod_ready.go:81] duration metric: took 5.865231ms waiting for pod "kube-apiserver-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:20:52.032686   30437 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:20:52.032746   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-600483
	I1101 00:20:52.032757   30437 round_trippers.go:469] Request Headers:
	I1101 00:20:52.032768   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:20:52.032780   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:20:52.034767   30437 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1101 00:20:52.034788   30437 round_trippers.go:577] Response Headers:
	I1101 00:20:52.034797   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:20:52.034806   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:20:52.034813   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:20:52.034821   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:20:52 GMT
	I1101 00:20:52.034844   30437 round_trippers.go:580]     Audit-Id: 937473c8-771e-48d0-82df-6211f6d12b77
	I1101 00:20:52.034859   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:20:52.035088   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-600483","namespace":"kube-system","uid":"9dd41877-c6ea-4591-90e1-632a234ffcf6","resourceVersion":"845","creationTimestamp":"2023-11-01T00:08:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f2b1fcba8b34b1f65e600fae0bd4374a","kubernetes.io/config.mirror":"f2b1fcba8b34b1f65e600fae0bd4374a","kubernetes.io/config.seen":"2023-11-01T00:08:20.448799328Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1101 00:20:52.035604   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:20:52.035618   30437 round_trippers.go:469] Request Headers:
	I1101 00:20:52.035629   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:20:52.035638   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:20:52.037697   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:20:52.037716   30437 round_trippers.go:577] Response Headers:
	I1101 00:20:52.037726   30437 round_trippers.go:580]     Audit-Id: 2d752d35-18b3-44f3-a0ff-1f4f361e3d19
	I1101 00:20:52.037734   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:20:52.037741   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:20:52.037749   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:20:52.037756   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:20:52.037764   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:20:52 GMT
	I1101 00:20:52.038020   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"865","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 6220 chars]
	I1101 00:20:52.038409   30437 pod_ready.go:92] pod "kube-controller-manager-multinode-600483" in "kube-system" namespace has status "Ready":"True"
	I1101 00:20:52.038427   30437 pod_ready.go:81] duration metric: took 5.732287ms waiting for pod "kube-controller-manager-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:20:52.038439   30437 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7kvtf" in "kube-system" namespace to be "Ready" ...
	I1101 00:20:52.202875   30437 request.go:629] Waited for 164.373283ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7kvtf
	I1101 00:20:52.202964   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7kvtf
	I1101 00:20:52.202972   30437 round_trippers.go:469] Request Headers:
	I1101 00:20:52.202983   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:20:52.202993   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:20:52.205649   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:20:52.205671   30437 round_trippers.go:577] Response Headers:
	I1101 00:20:52.205678   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:20:52.205683   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:20:52.205688   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:20:52.205693   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:20:52.205698   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:20:52 GMT
	I1101 00:20:52.205703   30437 round_trippers.go:580]     Audit-Id: 7c72572a-e36f-4a04-95d0-52b3d0f81db5
	I1101 00:20:52.205912   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7kvtf","generateName":"kube-proxy-","namespace":"kube-system","uid":"e2101b7f-e517-4100-905d-f46517e68255","resourceVersion":"983","creationTimestamp":"2023-11-01T00:09:23Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2d674cb3-a003-4ca9-a8b5-a283ae64b7c6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d674cb3-a003-4ca9-a8b5-a283ae64b7c6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5729 chars]
	I1101 00:20:52.402824   30437 request.go:629] Waited for 196.377215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m02
	I1101 00:20:52.402883   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m02
	I1101 00:20:52.402890   30437 round_trippers.go:469] Request Headers:
	I1101 00:20:52.402897   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:20:52.402903   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:20:52.405911   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:20:52.405929   30437 round_trippers.go:577] Response Headers:
	I1101 00:20:52.405935   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:20:52.405941   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:20:52.405947   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:20:52.405954   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:20:52 GMT
	I1101 00:20:52.405961   30437 round_trippers.go:580]     Audit-Id: 628c459f-5ec4-4715-a2a1-3ec5dd3b1fad
	I1101 00:20:52.405968   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:20:52.406170   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483-m02","uid":"36dbbc60-53e2-44a7-8be1-589b70b73c26","resourceVersion":"1012","creationTimestamp":"2023-11-01T00:20:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:20:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}
}}}},{"manager":"kube-controller-manager","operation":"Update","apiVers [truncated 3671 chars]
	I1101 00:20:52.406447   30437 pod_ready.go:92] pod "kube-proxy-7kvtf" in "kube-system" namespace has status "Ready":"True"
	I1101 00:20:52.406463   30437 pod_ready.go:81] duration metric: took 368.01734ms waiting for pod "kube-proxy-7kvtf" in "kube-system" namespace to be "Ready" ...
	I1101 00:20:52.406473   30437 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-84g2n" in "kube-system" namespace to be "Ready" ...
	I1101 00:20:52.603005   30437 request.go:629] Waited for 196.470474ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-proxy-84g2n
	I1101 00:20:52.603133   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-proxy-84g2n
	I1101 00:20:52.603143   30437 round_trippers.go:469] Request Headers:
	I1101 00:20:52.603156   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:20:52.603168   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:20:52.607036   30437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:20:52.607065   30437 round_trippers.go:577] Response Headers:
	I1101 00:20:52.607075   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:20:52.607083   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:20:52.607093   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:20:52.607101   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:20:52 GMT
	I1101 00:20:52.607108   30437 round_trippers.go:580]     Audit-Id: f07f4dc7-8f30-48f2-b634-273581992d01
	I1101 00:20:52.607116   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:20:52.607314   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-84g2n","generateName":"kube-proxy-","namespace":"kube-system","uid":"a98efae3-9303-43be-a139-d21a5630c6b8","resourceVersion":"680","creationTimestamp":"2023-11-01T00:10:15Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2d674cb3-a003-4ca9-a8b5-a283ae64b7c6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:10:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d674cb3-a003-4ca9-a8b5-a283ae64b7c6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1101 00:20:52.803183   30437 request.go:629] Waited for 195.369067ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m03
	I1101 00:20:52.803242   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m03
	I1101 00:20:52.803247   30437 round_trippers.go:469] Request Headers:
	I1101 00:20:52.803254   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:20:52.803261   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:20:52.806175   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:20:52.806204   30437 round_trippers.go:577] Response Headers:
	I1101 00:20:52.806214   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:20:52.806222   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:20:52.806230   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:20:52 GMT
	I1101 00:20:52.806238   30437 round_trippers.go:580]     Audit-Id: fb278318-950b-4a2b-9367-a6a334fbb4f0
	I1101 00:20:52.806246   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:20:52.806253   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:20:52.806377   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483-m03","uid":"5050dc91-014d-4a1c-b839-f60403866911","resourceVersion":"707","creationTimestamp":"2023-11-01T00:10:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:10:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3411 chars]
	I1101 00:20:52.806709   30437 pod_ready.go:92] pod "kube-proxy-84g2n" in "kube-system" namespace has status "Ready":"True"
	I1101 00:20:52.806730   30437 pod_ready.go:81] duration metric: took 400.251685ms waiting for pod "kube-proxy-84g2n" in "kube-system" namespace to be "Ready" ...
	I1101 00:20:52.806739   30437 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tq28b" in "kube-system" namespace to be "Ready" ...
	I1101 00:20:53.003023   30437 request.go:629] Waited for 196.206916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tq28b
	I1101 00:20:53.003082   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tq28b
	I1101 00:20:53.003087   30437 round_trippers.go:469] Request Headers:
	I1101 00:20:53.003094   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:20:53.003101   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:20:53.007172   30437 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1101 00:20:53.007204   30437 round_trippers.go:577] Response Headers:
	I1101 00:20:53.007215   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:20:53 GMT
	I1101 00:20:53.007223   30437 round_trippers.go:580]     Audit-Id: 11b82de4-cde8-4a33-a614-38b68f43a7c2
	I1101 00:20:53.007231   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:20:53.007238   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:20:53.007245   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:20:53.007251   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:20:53.007472   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tq28b","generateName":"kube-proxy-","namespace":"kube-system","uid":"9534d8b8-4536-4a0a-8af5-440e6871a85f","resourceVersion":"793","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2d674cb3-a003-4ca9-a8b5-a283ae64b7c6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d674cb3-a003-4ca9-a8b5-a283ae64b7c6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1101 00:20:53.202824   30437 request.go:629] Waited for 194.933503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:20:53.202890   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:20:53.202895   30437 round_trippers.go:469] Request Headers:
	I1101 00:20:53.202902   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:20:53.202909   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:20:53.206057   30437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:20:53.206079   30437 round_trippers.go:577] Response Headers:
	I1101 00:20:53.206086   30437 round_trippers.go:580]     Audit-Id: 1ac13e08-3539-4a57-9807-9e43fd72bb67
	I1101 00:20:53.206093   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:20:53.206098   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:20:53.206103   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:20:53.206108   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:20:53.206113   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:20:53 GMT
	I1101 00:20:53.206470   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"865","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 6220 chars]
	I1101 00:20:53.206802   30437 pod_ready.go:92] pod "kube-proxy-tq28b" in "kube-system" namespace has status "Ready":"True"
	I1101 00:20:53.206817   30437 pod_ready.go:81] duration metric: took 400.072987ms waiting for pod "kube-proxy-tq28b" in "kube-system" namespace to be "Ready" ...
	I1101 00:20:53.206826   30437 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:20:53.402198   30437 request.go:629] Waited for 195.311808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-600483
	I1101 00:20:53.402265   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-600483
	I1101 00:20:53.402270   30437 round_trippers.go:469] Request Headers:
	I1101 00:20:53.402277   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:20:53.402283   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:20:53.405145   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:20:53.405176   30437 round_trippers.go:577] Response Headers:
	I1101 00:20:53.405183   30437 round_trippers.go:580]     Audit-Id: a635f966-6ac8-4c8c-9a42-03e2c3557597
	I1101 00:20:53.405190   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:20:53.405195   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:20:53.405200   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:20:53.405205   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:20:53.405211   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:20:53 GMT
	I1101 00:20:53.405411   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-600483","namespace":"kube-system","uid":"9cdd0be5-035a-49f5-8796-831ebde28bf0","resourceVersion":"826","creationTimestamp":"2023-11-01T00:08:30Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"01c4e8f68a00a3553dcff3388cb56149","kubernetes.io/config.mirror":"01c4e8f68a00a3553dcff3388cb56149","kubernetes.io/config.seen":"2023-11-01T00:08:30.293495470Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1101 00:20:53.603194   30437 request.go:629] Waited for 197.408473ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:20:53.603264   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:20:53.603270   30437 round_trippers.go:469] Request Headers:
	I1101 00:20:53.603279   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:20:53.603288   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:20:53.606516   30437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:20:53.606539   30437 round_trippers.go:577] Response Headers:
	I1101 00:20:53.606546   30437 round_trippers.go:580]     Audit-Id: 2e6690b1-6ffe-44fb-805b-a37fc09a4e7f
	I1101 00:20:53.606551   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:20:53.606556   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:20:53.606561   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:20:53.606566   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:20:53.606571   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:20:53 GMT
	I1101 00:20:53.606733   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"865","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 6220 chars]
	I1101 00:20:53.607036   30437 pod_ready.go:92] pod "kube-scheduler-multinode-600483" in "kube-system" namespace has status "Ready":"True"
	I1101 00:20:53.607050   30437 pod_ready.go:81] duration metric: took 400.218338ms waiting for pod "kube-scheduler-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:20:53.607060   30437 pod_ready.go:38] duration metric: took 1.60142774s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 00:20:53.607076   30437 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 00:20:53.607115   30437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 00:20:53.619657   30437 system_svc.go:56] duration metric: took 12.575182ms WaitForService to wait for kubelet.
	I1101 00:20:53.619687   30437 kubeadm.go:581] duration metric: took 1.639842176s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1101 00:20:53.619729   30437 node_conditions.go:102] verifying NodePressure condition ...
	I1101 00:20:53.803191   30437 request.go:629] Waited for 183.388101ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/nodes
	I1101 00:20:53.803247   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes
	I1101 00:20:53.803254   30437 round_trippers.go:469] Request Headers:
	I1101 00:20:53.803261   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:20:53.803267   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:20:53.806422   30437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:20:53.806480   30437 round_trippers.go:577] Response Headers:
	I1101 00:20:53.806492   30437 round_trippers.go:580]     Audit-Id: 3c32f194-26cd-425a-b6f6-2e6e5f10bc03
	I1101 00:20:53.806503   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:20:53.806511   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:20:53.806522   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:20:53.806531   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:20:53.806550   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:20:53 GMT
	I1101 00:20:53.806794   30437 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1021"},"items":[{"metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"865","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"manag
edFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1", [truncated 15340 chars]
	I1101 00:20:53.807357   30437 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 00:20:53.807376   30437 node_conditions.go:123] node cpu capacity is 2
	I1101 00:20:53.807385   30437 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 00:20:53.807389   30437 node_conditions.go:123] node cpu capacity is 2
	I1101 00:20:53.807392   30437 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 00:20:53.807396   30437 node_conditions.go:123] node cpu capacity is 2
	I1101 00:20:53.807399   30437 node_conditions.go:105] duration metric: took 187.665381ms to run NodePressure ...
	I1101 00:20:53.807410   30437 start.go:228] waiting for startup goroutines ...
	I1101 00:20:53.807434   30437 start.go:242] writing updated cluster config ...
	I1101 00:20:53.807888   30437 config.go:182] Loaded profile config "multinode-600483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:20:53.808010   30437 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/config.json ...
	I1101 00:20:53.811388   30437 out.go:177] * Starting worker node multinode-600483-m03 in cluster multinode-600483
	I1101 00:20:53.812775   30437 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 00:20:53.812804   30437 cache.go:56] Caching tarball of preloaded images
	I1101 00:20:53.812918   30437 preload.go:174] Found /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 00:20:53.812933   30437 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1101 00:20:53.813071   30437 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/config.json ...
	I1101 00:20:53.813298   30437 start.go:365] acquiring machines lock for multinode-600483-m03: {Name:mk7aad88408c319111b9be8e59d9593a9e88374b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 00:20:53.813344   30437 start.go:369] acquired machines lock for "multinode-600483-m03" in 25.63µs
	I1101 00:20:53.813363   30437 start.go:96] Skipping create...Using existing machine configuration
	I1101 00:20:53.813374   30437 fix.go:54] fixHost starting: m03
	I1101 00:20:53.813669   30437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1101 00:20:53.813695   30437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:20:53.828127   30437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45713
	I1101 00:20:53.828528   30437 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:20:53.828941   30437 main.go:141] libmachine: Using API Version  1
	I1101 00:20:53.828963   30437 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:20:53.829277   30437 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:20:53.829461   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .DriverName
	I1101 00:20:53.829603   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetState
	I1101 00:20:53.831102   30437 fix.go:102] recreateIfNeeded on multinode-600483-m03: state=Running err=<nil>
	W1101 00:20:53.831116   30437 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 00:20:53.834041   30437 out.go:177] * Updating the running kvm2 "multinode-600483-m03" VM ...
	I1101 00:20:53.835472   30437 machine.go:88] provisioning docker machine ...
	I1101 00:20:53.835502   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .DriverName
	I1101 00:20:53.835771   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetMachineName
	I1101 00:20:53.835947   30437 buildroot.go:166] provisioning hostname "multinode-600483-m03"
	I1101 00:20:53.835969   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetMachineName
	I1101 00:20:53.836093   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHHostname
	I1101 00:20:53.838768   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | domain multinode-600483-m03 has defined MAC address 52:54:00:f7:e6:77 in network mk-multinode-600483
	I1101 00:20:53.839218   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:e6:77", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:53 +0000 UTC Type:0 Mac:52:54:00:f7:e6:77 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:multinode-600483-m03 Clientid:01:52:54:00:f7:e6:77}
	I1101 00:20:53.839247   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | domain multinode-600483-m03 has defined IP address 192.168.39.2 and MAC address 52:54:00:f7:e6:77 in network mk-multinode-600483
	I1101 00:20:53.839498   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHPort
	I1101 00:20:53.839714   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHKeyPath
	I1101 00:20:53.839884   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHKeyPath
	I1101 00:20:53.840065   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHUsername
	I1101 00:20:53.840237   30437 main.go:141] libmachine: Using SSH client type: native
	I1101 00:20:53.840562   30437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1101 00:20:53.840580   30437 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-600483-m03 && echo "multinode-600483-m03" | sudo tee /etc/hostname
	I1101 00:20:53.972860   30437 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-600483-m03
	
	I1101 00:20:53.972894   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHHostname
	I1101 00:20:53.975711   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | domain multinode-600483-m03 has defined MAC address 52:54:00:f7:e6:77 in network mk-multinode-600483
	I1101 00:20:53.976192   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:e6:77", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:53 +0000 UTC Type:0 Mac:52:54:00:f7:e6:77 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:multinode-600483-m03 Clientid:01:52:54:00:f7:e6:77}
	I1101 00:20:53.976228   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | domain multinode-600483-m03 has defined IP address 192.168.39.2 and MAC address 52:54:00:f7:e6:77 in network mk-multinode-600483
	I1101 00:20:53.976374   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHPort
	I1101 00:20:53.976548   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHKeyPath
	I1101 00:20:53.976695   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHKeyPath
	I1101 00:20:53.976807   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHUsername
	I1101 00:20:53.977015   30437 main.go:141] libmachine: Using SSH client type: native
	I1101 00:20:53.977392   30437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1101 00:20:53.977412   30437 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-600483-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-600483-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-600483-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 00:20:54.088797   30437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 00:20:54.088824   30437 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1101 00:20:54.088845   30437 buildroot.go:174] setting up certificates
	I1101 00:20:54.088854   30437 provision.go:83] configureAuth start
	I1101 00:20:54.088871   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetMachineName
	I1101 00:20:54.089152   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetIP
	I1101 00:20:54.091507   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | domain multinode-600483-m03 has defined MAC address 52:54:00:f7:e6:77 in network mk-multinode-600483
	I1101 00:20:54.091892   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:e6:77", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:53 +0000 UTC Type:0 Mac:52:54:00:f7:e6:77 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:multinode-600483-m03 Clientid:01:52:54:00:f7:e6:77}
	I1101 00:20:54.091946   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | domain multinode-600483-m03 has defined IP address 192.168.39.2 and MAC address 52:54:00:f7:e6:77 in network mk-multinode-600483
	I1101 00:20:54.092021   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHHostname
	I1101 00:20:54.094362   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | domain multinode-600483-m03 has defined MAC address 52:54:00:f7:e6:77 in network mk-multinode-600483
	I1101 00:20:54.094791   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:e6:77", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:53 +0000 UTC Type:0 Mac:52:54:00:f7:e6:77 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:multinode-600483-m03 Clientid:01:52:54:00:f7:e6:77}
	I1101 00:20:54.094827   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | domain multinode-600483-m03 has defined IP address 192.168.39.2 and MAC address 52:54:00:f7:e6:77 in network mk-multinode-600483
	I1101 00:20:54.094971   30437 provision.go:138] copyHostCerts
	I1101 00:20:54.095004   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 00:20:54.095050   30437 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem, removing ...
	I1101 00:20:54.095064   30437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 00:20:54.095140   30437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1101 00:20:54.095241   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 00:20:54.095265   30437 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem, removing ...
	I1101 00:20:54.095271   30437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 00:20:54.095309   30437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1101 00:20:54.095392   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 00:20:54.095425   30437 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem, removing ...
	I1101 00:20:54.095434   30437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 00:20:54.095470   30437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1101 00:20:54.095553   30437 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.multinode-600483-m03 san=[192.168.39.2 192.168.39.2 localhost 127.0.0.1 minikube multinode-600483-m03]
	I1101 00:20:54.180864   30437 provision.go:172] copyRemoteCerts
	I1101 00:20:54.180932   30437 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 00:20:54.180960   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHHostname
	I1101 00:20:54.183919   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | domain multinode-600483-m03 has defined MAC address 52:54:00:f7:e6:77 in network mk-multinode-600483
	I1101 00:20:54.184341   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:e6:77", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:53 +0000 UTC Type:0 Mac:52:54:00:f7:e6:77 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:multinode-600483-m03 Clientid:01:52:54:00:f7:e6:77}
	I1101 00:20:54.184378   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | domain multinode-600483-m03 has defined IP address 192.168.39.2 and MAC address 52:54:00:f7:e6:77 in network mk-multinode-600483
	I1101 00:20:54.184567   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHPort
	I1101 00:20:54.184770   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHKeyPath
	I1101 00:20:54.184928   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHUsername
	I1101 00:20:54.185084   30437 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483-m03/id_rsa Username:docker}
	I1101 00:20:54.269309   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 00:20:54.269392   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 00:20:54.291405   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 00:20:54.291491   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1101 00:20:54.313233   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 00:20:54.313317   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 00:20:54.335640   30437 provision.go:86] duration metric: configureAuth took 246.771279ms
	I1101 00:20:54.335670   30437 buildroot.go:189] setting minikube options for container-runtime
	I1101 00:20:54.335882   30437 config.go:182] Loaded profile config "multinode-600483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:20:54.335975   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHHostname
	I1101 00:20:54.338526   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | domain multinode-600483-m03 has defined MAC address 52:54:00:f7:e6:77 in network mk-multinode-600483
	I1101 00:20:54.338921   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:e6:77", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:53 +0000 UTC Type:0 Mac:52:54:00:f7:e6:77 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:multinode-600483-m03 Clientid:01:52:54:00:f7:e6:77}
	I1101 00:20:54.338958   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | domain multinode-600483-m03 has defined IP address 192.168.39.2 and MAC address 52:54:00:f7:e6:77 in network mk-multinode-600483
	I1101 00:20:54.339127   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHPort
	I1101 00:20:54.339330   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHKeyPath
	I1101 00:20:54.339547   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHKeyPath
	I1101 00:20:54.339701   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHUsername
	I1101 00:20:54.339862   30437 main.go:141] libmachine: Using SSH client type: native
	I1101 00:20:54.340210   30437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1101 00:20:54.340227   30437 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 00:22:24.878490   30437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 00:22:24.878517   30437 machine.go:91] provisioned docker machine in 1m31.043029042s
	I1101 00:22:24.878526   30437 start.go:300] post-start starting for "multinode-600483-m03" (driver="kvm2")
	I1101 00:22:24.878538   30437 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 00:22:24.878558   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .DriverName
	I1101 00:22:24.878903   30437 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 00:22:24.878931   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHHostname
	I1101 00:22:24.882499   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | domain multinode-600483-m03 has defined MAC address 52:54:00:f7:e6:77 in network mk-multinode-600483
	I1101 00:22:24.882967   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:e6:77", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:53 +0000 UTC Type:0 Mac:52:54:00:f7:e6:77 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:multinode-600483-m03 Clientid:01:52:54:00:f7:e6:77}
	I1101 00:22:24.883000   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | domain multinode-600483-m03 has defined IP address 192.168.39.2 and MAC address 52:54:00:f7:e6:77 in network mk-multinode-600483
	I1101 00:22:24.883155   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHPort
	I1101 00:22:24.883356   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHKeyPath
	I1101 00:22:24.883551   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHUsername
	I1101 00:22:24.883720   30437 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483-m03/id_rsa Username:docker}
	I1101 00:22:24.972434   30437 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 00:22:24.976470   30437 command_runner.go:130] > NAME=Buildroot
	I1101 00:22:24.976493   30437 command_runner.go:130] > VERSION=2021.02.12-1-g0cee705-dirty
	I1101 00:22:24.976500   30437 command_runner.go:130] > ID=buildroot
	I1101 00:22:24.976508   30437 command_runner.go:130] > VERSION_ID=2021.02.12
	I1101 00:22:24.976520   30437 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1101 00:22:24.976629   30437 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 00:22:24.976649   30437 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1101 00:22:24.976807   30437 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1101 00:22:24.976955   30437 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> 145042.pem in /etc/ssl/certs
	I1101 00:22:24.976973   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> /etc/ssl/certs/145042.pem
	I1101 00:22:24.977099   30437 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 00:22:24.987277   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /etc/ssl/certs/145042.pem (1708 bytes)
	I1101 00:22:25.012049   30437 start.go:303] post-start completed in 133.502963ms
	I1101 00:22:25.012078   30437 fix.go:56] fixHost completed within 1m31.198703183s
	I1101 00:22:25.012100   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHHostname
	I1101 00:22:25.015052   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | domain multinode-600483-m03 has defined MAC address 52:54:00:f7:e6:77 in network mk-multinode-600483
	I1101 00:22:25.015428   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:e6:77", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:53 +0000 UTC Type:0 Mac:52:54:00:f7:e6:77 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:multinode-600483-m03 Clientid:01:52:54:00:f7:e6:77}
	I1101 00:22:25.015458   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | domain multinode-600483-m03 has defined IP address 192.168.39.2 and MAC address 52:54:00:f7:e6:77 in network mk-multinode-600483
	I1101 00:22:25.015670   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHPort
	I1101 00:22:25.015917   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHKeyPath
	I1101 00:22:25.016095   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHKeyPath
	I1101 00:22:25.016236   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHUsername
	I1101 00:22:25.016412   30437 main.go:141] libmachine: Using SSH client type: native
	I1101 00:22:25.016728   30437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1101 00:22:25.016740   30437 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1101 00:22:25.132667   30437 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698798145.124815918
	
	I1101 00:22:25.132688   30437 fix.go:206] guest clock: 1698798145.124815918
	I1101 00:22:25.132696   30437 fix.go:219] Guest: 2023-11-01 00:22:25.124815918 +0000 UTC Remote: 2023-11-01 00:22:25.01208304 +0000 UTC m=+552.083449806 (delta=112.732878ms)
	I1101 00:22:25.132711   30437 fix.go:190] guest clock delta is within tolerance: 112.732878ms
	I1101 00:22:25.132716   30437 start.go:83] releasing machines lock for "multinode-600483-m03", held for 1m31.319360259s
	I1101 00:22:25.132735   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .DriverName
	I1101 00:22:25.132985   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetIP
	I1101 00:22:25.135972   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | domain multinode-600483-m03 has defined MAC address 52:54:00:f7:e6:77 in network mk-multinode-600483
	I1101 00:22:25.136412   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:e6:77", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:53 +0000 UTC Type:0 Mac:52:54:00:f7:e6:77 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:multinode-600483-m03 Clientid:01:52:54:00:f7:e6:77}
	I1101 00:22:25.136444   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | domain multinode-600483-m03 has defined IP address 192.168.39.2 and MAC address 52:54:00:f7:e6:77 in network mk-multinode-600483
	I1101 00:22:25.138868   30437 out.go:177] * Found network options:
	I1101 00:22:25.140609   30437 out.go:177]   - NO_PROXY=192.168.39.130,192.168.39.109
	W1101 00:22:25.142168   30437 proxy.go:119] fail to check proxy env: Error ip not in block
	W1101 00:22:25.142189   30437 proxy.go:119] fail to check proxy env: Error ip not in block
	I1101 00:22:25.142204   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .DriverName
	I1101 00:22:25.142847   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .DriverName
	I1101 00:22:25.143050   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .DriverName
	I1101 00:22:25.143140   30437 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 00:22:25.143181   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHHostname
	W1101 00:22:25.143253   30437 proxy.go:119] fail to check proxy env: Error ip not in block
	W1101 00:22:25.143282   30437 proxy.go:119] fail to check proxy env: Error ip not in block
	I1101 00:22:25.143356   30437 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 00:22:25.143463   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHHostname
	I1101 00:22:25.146075   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | domain multinode-600483-m03 has defined MAC address 52:54:00:f7:e6:77 in network mk-multinode-600483
	I1101 00:22:25.146408   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | domain multinode-600483-m03 has defined MAC address 52:54:00:f7:e6:77 in network mk-multinode-600483
	I1101 00:22:25.146491   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:e6:77", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:53 +0000 UTC Type:0 Mac:52:54:00:f7:e6:77 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:multinode-600483-m03 Clientid:01:52:54:00:f7:e6:77}
	I1101 00:22:25.146525   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | domain multinode-600483-m03 has defined IP address 192.168.39.2 and MAC address 52:54:00:f7:e6:77 in network mk-multinode-600483
	I1101 00:22:25.146676   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHPort
	I1101 00:22:25.146803   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:e6:77", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:53 +0000 UTC Type:0 Mac:52:54:00:f7:e6:77 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:multinode-600483-m03 Clientid:01:52:54:00:f7:e6:77}
	I1101 00:22:25.146827   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | domain multinode-600483-m03 has defined IP address 192.168.39.2 and MAC address 52:54:00:f7:e6:77 in network mk-multinode-600483
	I1101 00:22:25.146849   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHKeyPath
	I1101 00:22:25.147018   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHUsername
	I1101 00:22:25.147021   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHPort
	I1101 00:22:25.147190   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHKeyPath
	I1101 00:22:25.147229   30437 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483-m03/id_rsa Username:docker}
	I1101 00:22:25.147333   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetSSHUsername
	I1101 00:22:25.147488   30437 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483-m03/id_rsa Username:docker}
	I1101 00:22:25.274510   30437 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1101 00:22:25.386914   30437 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1101 00:22:25.392752   30437 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1101 00:22:25.392875   30437 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 00:22:25.392939   30437 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 00:22:25.402745   30437 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 00:22:25.402771   30437 start.go:472] detecting cgroup driver to use...
	I1101 00:22:25.402824   30437 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 00:22:25.418529   30437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 00:22:25.432353   30437 docker.go:204] disabling cri-docker service (if available) ...
	I1101 00:22:25.432410   30437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 00:22:25.446702   30437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 00:22:25.459533   30437 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 00:22:25.583675   30437 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 00:22:25.699043   30437 docker.go:220] disabling docker service ...
	I1101 00:22:25.699111   30437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 00:22:25.713378   30437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 00:22:25.727411   30437 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 00:22:25.857242   30437 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 00:22:26.009713   30437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 00:22:26.023151   30437 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 00:22:26.040145   30437 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
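Note: the "%!s(MISSING)" in the command two lines up appears to be minikube's printf-style logger tripping over a literal "%s" in the command string, not part of what actually ran on the node. Going by the confirmation line just above, the step writes /etc/crictl.yaml so that crictl targets the CRI-O socket; a minimal sketch of the resulting file, assuming the default socket path shown in the log:

	# /etc/crictl.yaml (as written by the step above)
	runtime-endpoint: unix:///var/run/crio/crio.sock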
	I1101 00:22:26.040613   30437 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 00:22:26.040664   30437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:22:26.050163   30437 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 00:22:26.050231   30437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:22:26.060367   30437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:22:26.069909   30437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:22:26.079776   30437 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 00:22:26.089089   30437 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 00:22:26.097065   30437 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1101 00:22:26.097398   30437 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 00:22:26.106189   30437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:22:26.252482   30437 ssh_runner.go:195] Run: sudo systemctl restart crio
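For orientation: the three sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, and set conmon's cgroup, after which the daemon is restarted. A sketch of the affected keys with the values taken from this log (the section headers follow the effective configuration dumped by `crio config` further down; the 02-crio.conf drop-in itself may lay them out differently, and its other keys are left untouched):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"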
	I1101 00:22:26.475715   30437 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 00:22:26.475799   30437 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 00:22:26.480523   30437 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1101 00:22:26.480543   30437 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1101 00:22:26.480550   30437 command_runner.go:130] > Device: 16h/22d	Inode: 1200        Links: 1
	I1101 00:22:26.480557   30437 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1101 00:22:26.480562   30437 command_runner.go:130] > Access: 2023-11-01 00:22:26.403445343 +0000
	I1101 00:22:26.480576   30437 command_runner.go:130] > Modify: 2023-11-01 00:22:26.403445343 +0000
	I1101 00:22:26.480581   30437 command_runner.go:130] > Change: 2023-11-01 00:22:26.403445343 +0000
	I1101 00:22:26.480585   30437 command_runner.go:130] >  Birth: -
	I1101 00:22:26.480600   30437 start.go:540] Will wait 60s for crictl version
	I1101 00:22:26.480640   30437 ssh_runner.go:195] Run: which crictl
	I1101 00:22:26.484423   30437 command_runner.go:130] > /usr/bin/crictl
	I1101 00:22:26.484683   30437 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 00:22:26.523095   30437 command_runner.go:130] > Version:  0.1.0
	I1101 00:22:26.523119   30437 command_runner.go:130] > RuntimeName:  cri-o
	I1101 00:22:26.523124   30437 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1101 00:22:26.523130   30437 command_runner.go:130] > RuntimeApiVersion:  v1
	I1101 00:22:26.523144   30437 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1101 00:22:26.523209   30437 ssh_runner.go:195] Run: crio --version
	I1101 00:22:26.568141   30437 command_runner.go:130] > crio version 1.24.1
	I1101 00:22:26.568165   30437 command_runner.go:130] > Version:          1.24.1
	I1101 00:22:26.568175   30437 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1101 00:22:26.568181   30437 command_runner.go:130] > GitTreeState:     dirty
	I1101 00:22:26.568190   30437 command_runner.go:130] > BuildDate:        2023-10-31T22:57:11Z
	I1101 00:22:26.568196   30437 command_runner.go:130] > GoVersion:        go1.19.9
	I1101 00:22:26.568203   30437 command_runner.go:130] > Compiler:         gc
	I1101 00:22:26.568209   30437 command_runner.go:130] > Platform:         linux/amd64
	I1101 00:22:26.568216   30437 command_runner.go:130] > Linkmode:         dynamic
	I1101 00:22:26.568229   30437 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1101 00:22:26.568242   30437 command_runner.go:130] > SeccompEnabled:   true
	I1101 00:22:26.568253   30437 command_runner.go:130] > AppArmorEnabled:  false
	I1101 00:22:26.569704   30437 ssh_runner.go:195] Run: crio --version
	I1101 00:22:26.616914   30437 command_runner.go:130] > crio version 1.24.1
	I1101 00:22:26.616938   30437 command_runner.go:130] > Version:          1.24.1
	I1101 00:22:26.616948   30437 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1101 00:22:26.616956   30437 command_runner.go:130] > GitTreeState:     dirty
	I1101 00:22:26.616965   30437 command_runner.go:130] > BuildDate:        2023-10-31T22:57:11Z
	I1101 00:22:26.616972   30437 command_runner.go:130] > GoVersion:        go1.19.9
	I1101 00:22:26.616979   30437 command_runner.go:130] > Compiler:         gc
	I1101 00:22:26.616986   30437 command_runner.go:130] > Platform:         linux/amd64
	I1101 00:22:26.616998   30437 command_runner.go:130] > Linkmode:         dynamic
	I1101 00:22:26.617012   30437 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1101 00:22:26.617022   30437 command_runner.go:130] > SeccompEnabled:   true
	I1101 00:22:26.617030   30437 command_runner.go:130] > AppArmorEnabled:  false
	I1101 00:22:26.620095   30437 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1101 00:22:26.621559   30437 out.go:177]   - env NO_PROXY=192.168.39.130
	I1101 00:22:26.623038   30437 out.go:177]   - env NO_PROXY=192.168.39.130,192.168.39.109
	I1101 00:22:26.624296   30437 main.go:141] libmachine: (multinode-600483-m03) Calling .GetIP
	I1101 00:22:26.627002   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | domain multinode-600483-m03 has defined MAC address 52:54:00:f7:e6:77 in network mk-multinode-600483
	I1101 00:22:26.627425   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:e6:77", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:53 +0000 UTC Type:0 Mac:52:54:00:f7:e6:77 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:multinode-600483-m03 Clientid:01:52:54:00:f7:e6:77}
	I1101 00:22:26.627448   30437 main.go:141] libmachine: (multinode-600483-m03) DBG | domain multinode-600483-m03 has defined IP address 192.168.39.2 and MAC address 52:54:00:f7:e6:77 in network mk-multinode-600483
	I1101 00:22:26.627666   30437 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1101 00:22:26.631776   30437 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1101 00:22:26.631819   30437 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483 for IP: 192.168.39.2
	I1101 00:22:26.631833   30437 certs.go:190] acquiring lock for shared ca certs: {Name:mk6185b3938c4b51e80f57b7c81926c81b632b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:22:26.631990   30437 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key
	I1101 00:22:26.632041   30437 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key
	I1101 00:22:26.632058   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1101 00:22:26.632077   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1101 00:22:26.632094   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1101 00:22:26.632111   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1101 00:22:26.632176   30437 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem (1338 bytes)
	W1101 00:22:26.632211   30437 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504_empty.pem, impossibly tiny 0 bytes
	I1101 00:22:26.632227   30437 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 00:22:26.632276   30437 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem (1078 bytes)
	I1101 00:22:26.632312   30437 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem (1123 bytes)
	I1101 00:22:26.632345   30437 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem (1675 bytes)
	I1101 00:22:26.632406   30437 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem (1708 bytes)
	I1101 00:22:26.632442   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> /usr/share/ca-certificates/145042.pem
	I1101 00:22:26.632462   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:22:26.632482   30437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem -> /usr/share/ca-certificates/14504.pem
	I1101 00:22:26.632832   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 00:22:26.657500   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 00:22:26.681524   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 00:22:26.704708   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 00:22:26.727861   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /usr/share/ca-certificates/145042.pem (1708 bytes)
	I1101 00:22:26.750527   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 00:22:26.773595   30437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem --> /usr/share/ca-certificates/14504.pem (1338 bytes)
	I1101 00:22:26.797619   30437 ssh_runner.go:195] Run: openssl version
	I1101 00:22:26.803066   30437 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1101 00:22:26.803126   30437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145042.pem && ln -fs /usr/share/ca-certificates/145042.pem /etc/ssl/certs/145042.pem"
	I1101 00:22:26.812677   30437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145042.pem
	I1101 00:22:26.817240   30437 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 00:22:26.817275   30437 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 00:22:26.817323   30437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145042.pem
	I1101 00:22:26.823045   30437 command_runner.go:130] > 3ec20f2e
	I1101 00:22:26.823140   30437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145042.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 00:22:26.831694   30437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 00:22:26.842267   30437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:22:26.847474   30437 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:22:26.847512   30437 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:22:26.847555   30437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:22:26.853218   30437 command_runner.go:130] > b5213941
	I1101 00:22:26.853291   30437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 00:22:26.863709   30437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14504.pem && ln -fs /usr/share/ca-certificates/14504.pem /etc/ssl/certs/14504.pem"
	I1101 00:22:26.873534   30437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14504.pem
	I1101 00:22:26.878640   30437 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 00:22:26.878667   30437 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 00:22:26.878708   30437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14504.pem
	I1101 00:22:26.884032   30437 command_runner.go:130] > 51391683
	I1101 00:22:26.884302   30437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14504.pem /etc/ssl/certs/51391683.0"
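The openssl/ln pairs above follow the standard OpenSSL subject-hash layout: each CA certificate copied under /usr/share/ca-certificates is made discoverable by linking it from /etc/ssl/certs under the hash of its subject plus a .0 suffix (the suffix is incremented on hash collisions). A sketch of one such step using values reported in this log (51391683 is the hash printed for 14504.pem above):

	# hash the cert's subject, then expose it under /etc/ssl/certs/<hash>.0
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/14504.pem)   # -> 51391683
	sudo ln -fs /etc/ssl/certs/14504.pem "/etc/ssl/certs/${HASH}.0"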
	I1101 00:22:26.892452   30437 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 00:22:26.896361   30437 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1101 00:22:26.896426   30437 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1101 00:22:26.896526   30437 ssh_runner.go:195] Run: crio config
	I1101 00:22:26.945533   30437 command_runner.go:130] ! time="2023-11-01 00:22:26.937581089Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1101 00:22:26.945580   30437 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1101 00:22:26.952797   30437 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1101 00:22:26.952825   30437 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1101 00:22:26.952836   30437 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1101 00:22:26.952841   30437 command_runner.go:130] > #
	I1101 00:22:26.952851   30437 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1101 00:22:26.952862   30437 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1101 00:22:26.952871   30437 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1101 00:22:26.952882   30437 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1101 00:22:26.952889   30437 command_runner.go:130] > # reload'.
	I1101 00:22:26.952904   30437 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1101 00:22:26.952917   30437 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1101 00:22:26.952932   30437 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1101 00:22:26.952945   30437 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1101 00:22:26.952954   30437 command_runner.go:130] > [crio]
	I1101 00:22:26.952965   30437 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1101 00:22:26.952977   30437 command_runner.go:130] > # containers images, in this directory.
	I1101 00:22:26.952985   30437 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1101 00:22:26.953003   30437 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1101 00:22:26.953014   30437 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1101 00:22:26.953028   30437 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1101 00:22:26.953041   30437 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1101 00:22:26.953048   30437 command_runner.go:130] > storage_driver = "overlay"
	I1101 00:22:26.953060   30437 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1101 00:22:26.953072   30437 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1101 00:22:26.953082   30437 command_runner.go:130] > storage_option = [
	I1101 00:22:26.953099   30437 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1101 00:22:26.953119   30437 command_runner.go:130] > ]
	I1101 00:22:26.953128   30437 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1101 00:22:26.953162   30437 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1101 00:22:26.953176   30437 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1101 00:22:26.953188   30437 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1101 00:22:26.953200   30437 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1101 00:22:26.953210   30437 command_runner.go:130] > # always happen on a node reboot
	I1101 00:22:26.953217   30437 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1101 00:22:26.953229   30437 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1101 00:22:26.953241   30437 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1101 00:22:26.953262   30437 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1101 00:22:26.953272   30437 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1101 00:22:26.953289   30437 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1101 00:22:26.953304   30437 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1101 00:22:26.953313   30437 command_runner.go:130] > # internal_wipe = true
	I1101 00:22:26.953322   30437 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1101 00:22:26.953335   30437 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1101 00:22:26.953348   30437 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1101 00:22:26.953360   30437 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1101 00:22:26.953373   30437 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1101 00:22:26.953379   30437 command_runner.go:130] > [crio.api]
	I1101 00:22:26.953388   30437 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1101 00:22:26.953398   30437 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1101 00:22:26.953407   30437 command_runner.go:130] > # IP address on which the stream server will listen.
	I1101 00:22:26.953418   30437 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1101 00:22:26.953428   30437 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1101 00:22:26.953440   30437 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1101 00:22:26.953450   30437 command_runner.go:130] > # stream_port = "0"
	I1101 00:22:26.953461   30437 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1101 00:22:26.953471   30437 command_runner.go:130] > # stream_enable_tls = false
	I1101 00:22:26.953484   30437 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1101 00:22:26.953491   30437 command_runner.go:130] > # stream_idle_timeout = ""
	I1101 00:22:26.953503   30437 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1101 00:22:26.953516   30437 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1101 00:22:26.953526   30437 command_runner.go:130] > # minutes.
	I1101 00:22:26.953532   30437 command_runner.go:130] > # stream_tls_cert = ""
	I1101 00:22:26.953545   30437 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1101 00:22:26.953558   30437 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1101 00:22:26.953568   30437 command_runner.go:130] > # stream_tls_key = ""
	I1101 00:22:26.953583   30437 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1101 00:22:26.953597   30437 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1101 00:22:26.953608   30437 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1101 00:22:26.953618   30437 command_runner.go:130] > # stream_tls_ca = ""
	I1101 00:22:26.953633   30437 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1101 00:22:26.953643   30437 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1101 00:22:26.953657   30437 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1101 00:22:26.953667   30437 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1101 00:22:26.953697   30437 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1101 00:22:26.953710   30437 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1101 00:22:26.953718   30437 command_runner.go:130] > [crio.runtime]
	I1101 00:22:26.953730   30437 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1101 00:22:26.953743   30437 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1101 00:22:26.953753   30437 command_runner.go:130] > # "nofile=1024:2048"
	I1101 00:22:26.953764   30437 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1101 00:22:26.953774   30437 command_runner.go:130] > # default_ulimits = [
	I1101 00:22:26.953780   30437 command_runner.go:130] > # ]
	I1101 00:22:26.953789   30437 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1101 00:22:26.953797   30437 command_runner.go:130] > # no_pivot = false
	I1101 00:22:26.953803   30437 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1101 00:22:26.953811   30437 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1101 00:22:26.953817   30437 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1101 00:22:26.953823   30437 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1101 00:22:26.953830   30437 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1101 00:22:26.953837   30437 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1101 00:22:26.953844   30437 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1101 00:22:26.953849   30437 command_runner.go:130] > # Cgroup setting for conmon
	I1101 00:22:26.953858   30437 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1101 00:22:26.953864   30437 command_runner.go:130] > conmon_cgroup = "pod"
	I1101 00:22:26.953870   30437 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1101 00:22:26.953878   30437 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1101 00:22:26.953885   30437 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1101 00:22:26.953890   30437 command_runner.go:130] > conmon_env = [
	I1101 00:22:26.953898   30437 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1101 00:22:26.953902   30437 command_runner.go:130] > ]
	I1101 00:22:26.953907   30437 command_runner.go:130] > # Additional environment variables to set for all the
	I1101 00:22:26.953914   30437 command_runner.go:130] > # containers. These are overridden if set in the
	I1101 00:22:26.953920   30437 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1101 00:22:26.953925   30437 command_runner.go:130] > # default_env = [
	I1101 00:22:26.953929   30437 command_runner.go:130] > # ]
	I1101 00:22:26.953935   30437 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1101 00:22:26.953940   30437 command_runner.go:130] > # selinux = false
	I1101 00:22:26.953948   30437 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1101 00:22:26.953957   30437 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1101 00:22:26.953962   30437 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1101 00:22:26.953968   30437 command_runner.go:130] > # seccomp_profile = ""
	I1101 00:22:26.953973   30437 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1101 00:22:26.953981   30437 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1101 00:22:26.953988   30437 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1101 00:22:26.953995   30437 command_runner.go:130] > # which might increase security.
	I1101 00:22:26.953999   30437 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1101 00:22:26.954006   30437 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1101 00:22:26.954014   30437 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1101 00:22:26.954020   30437 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1101 00:22:26.954029   30437 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1101 00:22:26.954034   30437 command_runner.go:130] > # This option supports live configuration reload.
	I1101 00:22:26.954041   30437 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1101 00:22:26.954047   30437 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1101 00:22:26.954053   30437 command_runner.go:130] > # the cgroup blockio controller.
	I1101 00:22:26.954058   30437 command_runner.go:130] > # blockio_config_file = ""
	I1101 00:22:26.954067   30437 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1101 00:22:26.954071   30437 command_runner.go:130] > # irqbalance daemon.
	I1101 00:22:26.954086   30437 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1101 00:22:26.954097   30437 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1101 00:22:26.954103   30437 command_runner.go:130] > # This option supports live configuration reload.
	I1101 00:22:26.954110   30437 command_runner.go:130] > # rdt_config_file = ""
	I1101 00:22:26.954115   30437 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1101 00:22:26.954120   30437 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1101 00:22:26.954127   30437 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1101 00:22:26.954133   30437 command_runner.go:130] > # separate_pull_cgroup = ""
	I1101 00:22:26.954140   30437 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1101 00:22:26.954148   30437 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1101 00:22:26.954161   30437 command_runner.go:130] > # will be added.
	I1101 00:22:26.954168   30437 command_runner.go:130] > # default_capabilities = [
	I1101 00:22:26.954172   30437 command_runner.go:130] > # 	"CHOWN",
	I1101 00:22:26.954178   30437 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1101 00:22:26.954183   30437 command_runner.go:130] > # 	"FSETID",
	I1101 00:22:26.954187   30437 command_runner.go:130] > # 	"FOWNER",
	I1101 00:22:26.954193   30437 command_runner.go:130] > # 	"SETGID",
	I1101 00:22:26.954197   30437 command_runner.go:130] > # 	"SETUID",
	I1101 00:22:26.954201   30437 command_runner.go:130] > # 	"SETPCAP",
	I1101 00:22:26.954206   30437 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1101 00:22:26.954210   30437 command_runner.go:130] > # 	"KILL",
	I1101 00:22:26.954216   30437 command_runner.go:130] > # ]
	I1101 00:22:26.954222   30437 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1101 00:22:26.954229   30437 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1101 00:22:26.954233   30437 command_runner.go:130] > # default_sysctls = [
	I1101 00:22:26.954239   30437 command_runner.go:130] > # ]
	I1101 00:22:26.954244   30437 command_runner.go:130] > # List of devices on the host that a
	I1101 00:22:26.954252   30437 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1101 00:22:26.954258   30437 command_runner.go:130] > # allowed_devices = [
	I1101 00:22:26.954262   30437 command_runner.go:130] > # 	"/dev/fuse",
	I1101 00:22:26.954268   30437 command_runner.go:130] > # ]
	I1101 00:22:26.954273   30437 command_runner.go:130] > # List of additional devices. specified as
	I1101 00:22:26.954280   30437 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1101 00:22:26.954288   30437 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1101 00:22:26.954310   30437 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1101 00:22:26.954317   30437 command_runner.go:130] > # additional_devices = [
	I1101 00:22:26.954320   30437 command_runner.go:130] > # ]
	I1101 00:22:26.954327   30437 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1101 00:22:26.954331   30437 command_runner.go:130] > # cdi_spec_dirs = [
	I1101 00:22:26.954335   30437 command_runner.go:130] > # 	"/etc/cdi",
	I1101 00:22:26.954341   30437 command_runner.go:130] > # 	"/var/run/cdi",
	I1101 00:22:26.954345   30437 command_runner.go:130] > # ]
	I1101 00:22:26.954353   30437 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1101 00:22:26.954361   30437 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1101 00:22:26.954367   30437 command_runner.go:130] > # Defaults to false.
	I1101 00:22:26.954372   30437 command_runner.go:130] > # device_ownership_from_security_context = false
	I1101 00:22:26.954379   30437 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1101 00:22:26.954388   30437 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1101 00:22:26.954393   30437 command_runner.go:130] > # hooks_dir = [
	I1101 00:22:26.954400   30437 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1101 00:22:26.954404   30437 command_runner.go:130] > # ]
	I1101 00:22:26.954412   30437 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1101 00:22:26.954419   30437 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1101 00:22:26.954424   30437 command_runner.go:130] > # its default mounts from the following two files:
	I1101 00:22:26.954429   30437 command_runner.go:130] > #
	I1101 00:22:26.954435   30437 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1101 00:22:26.954444   30437 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1101 00:22:26.954452   30437 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1101 00:22:26.954456   30437 command_runner.go:130] > #
	I1101 00:22:26.954465   30437 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1101 00:22:26.954471   30437 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1101 00:22:26.954481   30437 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1101 00:22:26.954486   30437 command_runner.go:130] > #      only add mounts it finds in this file.
	I1101 00:22:26.954492   30437 command_runner.go:130] > #
	I1101 00:22:26.954496   30437 command_runner.go:130] > # default_mounts_file = ""
	I1101 00:22:26.954502   30437 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1101 00:22:26.954511   30437 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1101 00:22:26.954515   30437 command_runner.go:130] > pids_limit = 1024
	I1101 00:22:26.954522   30437 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1101 00:22:26.954531   30437 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1101 00:22:26.954537   30437 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1101 00:22:26.954548   30437 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1101 00:22:26.954555   30437 command_runner.go:130] > # log_size_max = -1
	I1101 00:22:26.954562   30437 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I1101 00:22:26.954568   30437 command_runner.go:130] > # log_to_journald = false
	I1101 00:22:26.954577   30437 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1101 00:22:26.954588   30437 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1101 00:22:26.954596   30437 command_runner.go:130] > # Path to directory for container attach sockets.
	I1101 00:22:26.954607   30437 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1101 00:22:26.954619   30437 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1101 00:22:26.954629   30437 command_runner.go:130] > # bind_mount_prefix = ""
	I1101 00:22:26.954637   30437 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1101 00:22:26.954649   30437 command_runner.go:130] > # read_only = false
	I1101 00:22:26.954663   30437 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1101 00:22:26.954676   30437 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1101 00:22:26.954686   30437 command_runner.go:130] > # live configuration reload.
	I1101 00:22:26.954692   30437 command_runner.go:130] > # log_level = "info"
	I1101 00:22:26.954699   30437 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1101 00:22:26.954705   30437 command_runner.go:130] > # This option supports live configuration reload.
	I1101 00:22:26.954712   30437 command_runner.go:130] > # log_filter = ""
	I1101 00:22:26.954717   30437 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1101 00:22:26.954726   30437 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1101 00:22:26.954730   30437 command_runner.go:130] > # separated by comma.
	I1101 00:22:26.954735   30437 command_runner.go:130] > # uid_mappings = ""
	I1101 00:22:26.954743   30437 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1101 00:22:26.954750   30437 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1101 00:22:26.954756   30437 command_runner.go:130] > # separated by comma.
	I1101 00:22:26.954761   30437 command_runner.go:130] > # gid_mappings = ""
	I1101 00:22:26.954767   30437 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1101 00:22:26.954775   30437 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1101 00:22:26.954781   30437 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1101 00:22:26.954788   30437 command_runner.go:130] > # minimum_mappable_uid = -1
	I1101 00:22:26.954794   30437 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1101 00:22:26.954800   30437 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1101 00:22:26.954808   30437 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1101 00:22:26.954813   30437 command_runner.go:130] > # minimum_mappable_gid = -1
	I1101 00:22:26.954824   30437 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1101 00:22:26.954830   30437 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1101 00:22:26.954838   30437 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1101 00:22:26.954842   30437 command_runner.go:130] > # ctr_stop_timeout = 30
	I1101 00:22:26.954850   30437 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1101 00:22:26.954856   30437 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1101 00:22:26.954864   30437 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1101 00:22:26.954869   30437 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1101 00:22:26.954879   30437 command_runner.go:130] > drop_infra_ctr = false
	I1101 00:22:26.954888   30437 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1101 00:22:26.954894   30437 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1101 00:22:26.954903   30437 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1101 00:22:26.954910   30437 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1101 00:22:26.954918   30437 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1101 00:22:26.954923   30437 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1101 00:22:26.954930   30437 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1101 00:22:26.954937   30437 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1101 00:22:26.954944   30437 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1101 00:22:26.954951   30437 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1101 00:22:26.954959   30437 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1101 00:22:26.954968   30437 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1101 00:22:26.954972   30437 command_runner.go:130] > # default_runtime = "runc"
	I1101 00:22:26.954980   30437 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1101 00:22:26.954988   30437 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1101 00:22:26.954996   30437 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I1101 00:22:26.955004   30437 command_runner.go:130] > # creation as a file is not desired either.
	I1101 00:22:26.955012   30437 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1101 00:22:26.955019   30437 command_runner.go:130] > # the hostname is being managed dynamically.
	I1101 00:22:26.955024   30437 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1101 00:22:26.955030   30437 command_runner.go:130] > # ]
	I1101 00:22:26.955037   30437 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1101 00:22:26.955044   30437 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1101 00:22:26.955050   30437 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1101 00:22:26.955056   30437 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1101 00:22:26.955066   30437 command_runner.go:130] > #
	I1101 00:22:26.955070   30437 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1101 00:22:26.955075   30437 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1101 00:22:26.955079   30437 command_runner.go:130] > #  runtime_type = "oci"
	I1101 00:22:26.955083   30437 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1101 00:22:26.955092   30437 command_runner.go:130] > #  privileged_without_host_devices = false
	I1101 00:22:26.955096   30437 command_runner.go:130] > #  allowed_annotations = []
	I1101 00:22:26.955100   30437 command_runner.go:130] > # Where:
	I1101 00:22:26.955105   30437 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1101 00:22:26.955111   30437 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1101 00:22:26.955118   30437 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1101 00:22:26.955124   30437 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1101 00:22:26.955129   30437 command_runner.go:130] > #   in $PATH.
	I1101 00:22:26.955135   30437 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1101 00:22:26.955141   30437 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1101 00:22:26.955147   30437 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1101 00:22:26.955154   30437 command_runner.go:130] > #   state.
	I1101 00:22:26.955160   30437 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1101 00:22:26.955169   30437 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1101 00:22:26.955177   30437 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1101 00:22:26.955183   30437 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1101 00:22:26.955191   30437 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1101 00:22:26.955198   30437 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1101 00:22:26.955206   30437 command_runner.go:130] > #   The currently recognized values are:
	I1101 00:22:26.955212   30437 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1101 00:22:26.955222   30437 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1101 00:22:26.955228   30437 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1101 00:22:26.955237   30437 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1101 00:22:26.955245   30437 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1101 00:22:26.955253   30437 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1101 00:22:26.955260   30437 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1101 00:22:26.955269   30437 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1101 00:22:26.955274   30437 command_runner.go:130] > #   should be moved to the container's cgroup
	I1101 00:22:26.955281   30437 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1101 00:22:26.955286   30437 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1101 00:22:26.955291   30437 command_runner.go:130] > runtime_type = "oci"
	I1101 00:22:26.955295   30437 command_runner.go:130] > runtime_root = "/run/runc"
	I1101 00:22:26.955300   30437 command_runner.go:130] > runtime_config_path = ""
	I1101 00:22:26.955304   30437 command_runner.go:130] > monitor_path = ""
	I1101 00:22:26.955309   30437 command_runner.go:130] > monitor_cgroup = ""
	I1101 00:22:26.955313   30437 command_runner.go:130] > monitor_exec_cgroup = ""
	I1101 00:22:26.955322   30437 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1101 00:22:26.955326   30437 command_runner.go:130] > # running containers
	I1101 00:22:26.955331   30437 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1101 00:22:26.955340   30437 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1101 00:22:26.955372   30437 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1101 00:22:26.955382   30437 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1101 00:22:26.955387   30437 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1101 00:22:26.955391   30437 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1101 00:22:26.955395   30437 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1101 00:22:26.955401   30437 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1101 00:22:26.955409   30437 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1101 00:22:26.955413   30437 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1101 00:22:26.955420   30437 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1101 00:22:26.955428   30437 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1101 00:22:26.955435   30437 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1101 00:22:26.955445   30437 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1101 00:22:26.955452   30437 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1101 00:22:26.955460   30437 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1101 00:22:26.955469   30437 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1101 00:22:26.955479   30437 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1101 00:22:26.955487   30437 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1101 00:22:26.955494   30437 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1101 00:22:26.955501   30437 command_runner.go:130] > # Example:
	I1101 00:22:26.955506   30437 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1101 00:22:26.955513   30437 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1101 00:22:26.955518   30437 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1101 00:22:26.955526   30437 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1101 00:22:26.955530   30437 command_runner.go:130] > # cpuset = 0
	I1101 00:22:26.955535   30437 command_runner.go:130] > # cpushares = "0-1"
	I1101 00:22:26.955538   30437 command_runner.go:130] > # Where:
	I1101 00:22:26.955547   30437 command_runner.go:130] > # The workload name is workload-type.
	I1101 00:22:26.955554   30437 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1101 00:22:26.955559   30437 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1101 00:22:26.955565   30437 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1101 00:22:26.955579   30437 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1101 00:22:26.955592   30437 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1101 00:22:26.955600   30437 command_runner.go:130] > # 
	I1101 00:22:26.955611   30437 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1101 00:22:26.955620   30437 command_runner.go:130] > #
	I1101 00:22:26.955629   30437 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1101 00:22:26.955642   30437 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1101 00:22:26.955655   30437 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1101 00:22:26.955669   30437 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1101 00:22:26.955679   30437 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1101 00:22:26.955683   30437 command_runner.go:130] > [crio.image]
	I1101 00:22:26.955692   30437 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1101 00:22:26.955699   30437 command_runner.go:130] > # default_transport = "docker://"
	I1101 00:22:26.955706   30437 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1101 00:22:26.955714   30437 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1101 00:22:26.955719   30437 command_runner.go:130] > # global_auth_file = ""
	I1101 00:22:26.955724   30437 command_runner.go:130] > # The image used to instantiate infra containers.
	I1101 00:22:26.955732   30437 command_runner.go:130] > # This option supports live configuration reload.
	I1101 00:22:26.955737   30437 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1101 00:22:26.955746   30437 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1101 00:22:26.955752   30437 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1101 00:22:26.955760   30437 command_runner.go:130] > # This option supports live configuration reload.
	I1101 00:22:26.955765   30437 command_runner.go:130] > # pause_image_auth_file = ""
	I1101 00:22:26.955771   30437 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1101 00:22:26.955779   30437 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1101 00:22:26.955786   30437 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1101 00:22:26.955794   30437 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1101 00:22:26.955826   30437 command_runner.go:130] > # pause_command = "/pause"
	I1101 00:22:26.955841   30437 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1101 00:22:26.955848   30437 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1101 00:22:26.955856   30437 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1101 00:22:26.955863   30437 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1101 00:22:26.955871   30437 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1101 00:22:26.955875   30437 command_runner.go:130] > # signature_policy = ""
	I1101 00:22:26.955882   30437 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1101 00:22:26.955890   30437 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1101 00:22:26.955897   30437 command_runner.go:130] > # changing them here.
	I1101 00:22:26.955901   30437 command_runner.go:130] > # insecure_registries = [
	I1101 00:22:26.955907   30437 command_runner.go:130] > # ]
	I1101 00:22:26.955919   30437 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1101 00:22:26.955926   30437 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1101 00:22:26.955950   30437 command_runner.go:130] > # image_volumes = "mkdir"
	I1101 00:22:26.955961   30437 command_runner.go:130] > # Temporary directory to use for storing big files
	I1101 00:22:26.955971   30437 command_runner.go:130] > # big_files_temporary_dir = ""
	I1101 00:22:26.955981   30437 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1101 00:22:26.955988   30437 command_runner.go:130] > # CNI plugins.
	I1101 00:22:26.955992   30437 command_runner.go:130] > [crio.network]
	I1101 00:22:26.956002   30437 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1101 00:22:26.956008   30437 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1101 00:22:26.956014   30437 command_runner.go:130] > # cni_default_network = ""
	I1101 00:22:26.956020   30437 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1101 00:22:26.956028   30437 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1101 00:22:26.956034   30437 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1101 00:22:26.956040   30437 command_runner.go:130] > # plugin_dirs = [
	I1101 00:22:26.956044   30437 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1101 00:22:26.956048   30437 command_runner.go:130] > # ]
	I1101 00:22:26.956054   30437 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1101 00:22:26.956060   30437 command_runner.go:130] > [crio.metrics]
	I1101 00:22:26.956065   30437 command_runner.go:130] > # Globally enable or disable metrics support.
	I1101 00:22:26.956070   30437 command_runner.go:130] > enable_metrics = true
	I1101 00:22:26.956075   30437 command_runner.go:130] > # Specify enabled metrics collectors.
	I1101 00:22:26.956082   30437 command_runner.go:130] > # Per default all metrics are enabled.
	I1101 00:22:26.956093   30437 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1101 00:22:26.956101   30437 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1101 00:22:26.956107   30437 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1101 00:22:26.956114   30437 command_runner.go:130] > # metrics_collectors = [
	I1101 00:22:26.956118   30437 command_runner.go:130] > # 	"operations",
	I1101 00:22:26.956126   30437 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1101 00:22:26.956131   30437 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1101 00:22:26.956137   30437 command_runner.go:130] > # 	"operations_errors",
	I1101 00:22:26.956141   30437 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1101 00:22:26.956146   30437 command_runner.go:130] > # 	"image_pulls_by_name",
	I1101 00:22:26.956151   30437 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1101 00:22:26.956157   30437 command_runner.go:130] > # 	"image_pulls_failures",
	I1101 00:22:26.956161   30437 command_runner.go:130] > # 	"image_pulls_successes",
	I1101 00:22:26.956166   30437 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1101 00:22:26.956172   30437 command_runner.go:130] > # 	"image_layer_reuse",
	I1101 00:22:26.956177   30437 command_runner.go:130] > # 	"containers_oom_total",
	I1101 00:22:26.956183   30437 command_runner.go:130] > # 	"containers_oom",
	I1101 00:22:26.956187   30437 command_runner.go:130] > # 	"processes_defunct",
	I1101 00:22:26.956191   30437 command_runner.go:130] > # 	"operations_total",
	I1101 00:22:26.956195   30437 command_runner.go:130] > # 	"operations_latency_seconds",
	I1101 00:22:26.956203   30437 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1101 00:22:26.956208   30437 command_runner.go:130] > # 	"operations_errors_total",
	I1101 00:22:26.956215   30437 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1101 00:22:26.956221   30437 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1101 00:22:26.956228   30437 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1101 00:22:26.956232   30437 command_runner.go:130] > # 	"image_pulls_success_total",
	I1101 00:22:26.956238   30437 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1101 00:22:26.956243   30437 command_runner.go:130] > # 	"containers_oom_count_total",
	I1101 00:22:26.956246   30437 command_runner.go:130] > # ]
	I1101 00:22:26.956251   30437 command_runner.go:130] > # The port on which the metrics server will listen.
	I1101 00:22:26.956258   30437 command_runner.go:130] > # metrics_port = 9090
	I1101 00:22:26.956263   30437 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1101 00:22:26.956270   30437 command_runner.go:130] > # metrics_socket = ""
	I1101 00:22:26.956275   30437 command_runner.go:130] > # The certificate for the secure metrics server.
	I1101 00:22:26.956283   30437 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1101 00:22:26.956289   30437 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1101 00:22:26.956297   30437 command_runner.go:130] > # certificate on any modification event.
	I1101 00:22:26.956301   30437 command_runner.go:130] > # metrics_cert = ""
	I1101 00:22:26.956308   30437 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1101 00:22:26.956313   30437 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1101 00:22:26.956320   30437 command_runner.go:130] > # metrics_key = ""
	I1101 00:22:26.956326   30437 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1101 00:22:26.956332   30437 command_runner.go:130] > [crio.tracing]
	I1101 00:22:26.956338   30437 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1101 00:22:26.956345   30437 command_runner.go:130] > # enable_tracing = false
	I1101 00:22:26.956350   30437 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1101 00:22:26.956355   30437 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1101 00:22:26.956363   30437 command_runner.go:130] > # Number of samples to collect per million spans.
	I1101 00:22:26.956367   30437 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1101 00:22:26.956375   30437 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1101 00:22:26.956379   30437 command_runner.go:130] > [crio.stats]
	I1101 00:22:26.956387   30437 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1101 00:22:26.956393   30437 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1101 00:22:26.956399   30437 command_runner.go:130] > # stats_collection_period = 0
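	For reference, the effective configuration CRI-O is actually running with can be dumped from a shell on the node (e.g. via minikube ssh); a sketch, assuming the crio and crictl binaries shipped with this runtime profile:

	  $ sudo crio config     # print the configuration CRI-O would use
	  $ sudo crictl info     # runtime status and config as seen through the CRI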
	I1101 00:22:26.956458   30437 cni.go:84] Creating CNI manager for ""
	I1101 00:22:26.956466   30437 cni.go:136] 3 nodes found, recommending kindnet
	I1101 00:22:26.956474   30437 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 00:22:26.956493   30437 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-600483 NodeName:multinode-600483-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.130"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 00:22:26.956626   30437 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-600483-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.130"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 00:22:26.956691   30437 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-600483-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-600483 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
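	Once the files below have been written and the node has joined, the rendered kubelet setup can be double-checked from a shell on the node (a sketch; the paths are the ones used in the following scp and kubeadm steps):

	  $ sudo systemctl cat kubelet
	  $ sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	  $ sudo cat /var/lib/kubelet/config.yaml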
	I1101 00:22:26.956747   30437 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 00:22:26.967137   30437 command_runner.go:130] > kubeadm
	I1101 00:22:26.967165   30437 command_runner.go:130] > kubectl
	I1101 00:22:26.967172   30437 command_runner.go:130] > kubelet
	I1101 00:22:26.967228   30437 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 00:22:26.967291   30437 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1101 00:22:26.976223   30437 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1101 00:22:26.993875   30437 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 00:22:27.010012   30437 ssh_runner.go:195] Run: grep 192.168.39.130	control-plane.minikube.internal$ /etc/hosts
	I1101 00:22:27.014000   30437 command_runner.go:130] > 192.168.39.130	control-plane.minikube.internal
	I1101 00:22:27.014059   30437 host.go:66] Checking if "multinode-600483" exists ...
	I1101 00:22:27.014281   30437 config.go:182] Loaded profile config "multinode-600483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:22:27.014503   30437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1101 00:22:27.014542   30437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:22:27.030654   30437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41107
	I1101 00:22:27.031065   30437 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:22:27.031504   30437 main.go:141] libmachine: Using API Version  1
	I1101 00:22:27.031520   30437 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:22:27.031826   30437 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:22:27.032046   30437 main.go:141] libmachine: (multinode-600483) Calling .DriverName
	I1101 00:22:27.032212   30437 start.go:304] JoinCluster: &{Name:multinode-600483 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.3 ClusterName:multinode-600483 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.130 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.109 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.2 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:22:27.032354   30437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1101 00:22:27.032373   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHHostname
	I1101 00:22:27.036150   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:22:27.036717   30437 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:22:27.036751   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:22:27.036936   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHPort
	I1101 00:22:27.037170   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:22:27.037347   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHUsername
	I1101 00:22:27.037506   30437 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483/id_rsa Username:docker}
	I1101 00:22:27.220247   30437 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 4my5pm.vkzbkp6aub2uinsw --discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 
	I1101 00:22:27.226576   30437 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.2 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}
	I1101 00:22:27.226615   30437 host.go:66] Checking if "multinode-600483" exists ...
	I1101 00:22:27.227016   30437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1101 00:22:27.227059   30437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:22:27.242391   30437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46175
	I1101 00:22:27.242811   30437 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:22:27.243267   30437 main.go:141] libmachine: Using API Version  1
	I1101 00:22:27.243283   30437 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:22:27.243607   30437 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:22:27.243788   30437 main.go:141] libmachine: (multinode-600483) Calling .DriverName
	I1101 00:22:27.243989   30437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl drain multinode-600483-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1101 00:22:27.244010   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHHostname
	I1101 00:22:27.246697   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:22:27.247143   30437 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:22:27.247175   30437 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:22:27.247370   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHPort
	I1101 00:22:27.247575   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:22:27.247746   30437 main.go:141] libmachine: (multinode-600483) Calling .GetSSHUsername
	I1101 00:22:27.247896   30437 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483/id_rsa Username:docker}
	I1101 00:22:27.441197   30437 command_runner.go:130] > node/multinode-600483-m03 cordoned
	I1101 00:22:30.483360   30437 command_runner.go:130] > pod "busybox-5bc68d56bd-nsjs7" has DeletionTimestamp older than 1 seconds, skipping
	I1101 00:22:30.483388   30437 command_runner.go:130] > node/multinode-600483-m03 drained
	I1101 00:22:30.485077   30437 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1101 00:22:30.485106   30437 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-ldrkn, kube-system/kube-proxy-84g2n
	I1101 00:22:30.485138   30437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl drain multinode-600483-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.241123882s)
	I1101 00:22:30.485165   30437 node.go:108] successfully drained node "m03"
	I1101 00:22:30.485476   30437 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 00:22:30.485671   30437 kapi.go:59] client config for multinode-600483: &rest.Config{Host:"https://192.168.39.130:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.key", CAFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 00:22:30.485897   30437 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1101 00:22:30.485938   30437 round_trippers.go:463] DELETE https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m03
	I1101 00:22:30.485948   30437 round_trippers.go:469] Request Headers:
	I1101 00:22:30.485958   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:22:30.485971   30437 round_trippers.go:473]     Content-Type: application/json
	I1101 00:22:30.485979   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:22:30.504545   30437 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I1101 00:22:30.504567   30437 round_trippers.go:577] Response Headers:
	I1101 00:22:30.504574   30437 round_trippers.go:580]     Audit-Id: 23584a0d-36ed-44f7-b92b-4b8b876f7efc
	I1101 00:22:30.504579   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:22:30.504584   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:22:30.504590   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:22:30.504598   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:22:30.504606   30437 round_trippers.go:580]     Content-Length: 171
	I1101 00:22:30.504615   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:22:30 GMT
	I1101 00:22:30.504639   30437 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-600483-m03","kind":"nodes","uid":"5050dc91-014d-4a1c-b839-f60403866911"}}
	I1101 00:22:30.504681   30437 node.go:124] successfully deleted node "m03"
	I1101 00:22:30.504697   30437 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.2 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}
	I1101 00:22:30.504722   30437 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.2 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}
	I1101 00:22:30.504744   30437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4my5pm.vkzbkp6aub2uinsw --discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-600483-m03"
	I1101 00:22:30.565244   30437 command_runner.go:130] > [preflight] Running pre-flight checks
	I1101 00:22:30.731506   30437 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1101 00:22:30.731530   30437 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1101 00:22:30.802516   30437 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 00:22:30.802634   30437 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 00:22:30.802840   30437 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1101 00:22:30.935723   30437 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1101 00:22:31.463727   30437 command_runner.go:130] > This node has joined the cluster:
	I1101 00:22:31.463755   30437 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1101 00:22:31.463766   30437 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1101 00:22:31.463776   30437 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1101 00:22:31.466371   30437 command_runner.go:130] ! W1101 00:22:30.557184    2333 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1101 00:22:31.466404   30437 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1101 00:22:31.466415   30437 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1101 00:22:31.466428   30437 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1101 00:22:31.466452   30437 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1101 00:22:31.709546   30437 start.go:306] JoinCluster complete in 4.677330418s
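	With the join reported complete, the standard way to confirm the result from the control plane is (a sketch; node and namespace names as in the log above):

	  $ kubectl get nodes -o wide
	  $ kubectl get pods -n kube-system -o wide --field-selector spec.nodeName=multinode-600483-m03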
	I1101 00:22:31.709568   30437 cni.go:84] Creating CNI manager for ""
	I1101 00:22:31.709573   30437 cni.go:136] 3 nodes found, recommending kindnet
	I1101 00:22:31.709616   30437 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 00:22:31.715270   30437 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1101 00:22:31.715293   30437 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1101 00:22:31.715303   30437 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1101 00:22:31.715314   30437 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1101 00:22:31.715323   30437 command_runner.go:130] > Access: 2023-11-01 00:18:23.441951890 +0000
	I1101 00:22:31.715331   30437 command_runner.go:130] > Modify: 2023-10-31 23:04:20.000000000 +0000
	I1101 00:22:31.715341   30437 command_runner.go:130] > Change: 2023-11-01 00:18:21.588951890 +0000
	I1101 00:22:31.715353   30437 command_runner.go:130] >  Birth: -
	I1101 00:22:31.715415   30437 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1101 00:22:31.715426   30437 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1101 00:22:31.734555   30437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 00:22:32.040317   30437 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1101 00:22:32.044863   30437 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1101 00:22:32.050472   30437 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1101 00:22:32.081935   30437 command_runner.go:130] > daemonset.apps/kindnet configured
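	The kindnet objects applied above can be checked with (a sketch; the DaemonSet name comes from the apply output above, while the app=kindnet label selector is an assumption about the shipped manifest):

	  $ kubectl -n kube-system rollout status daemonset/kindnet
	  $ kubectl -n kube-system get pods -l app=kindnet -o wide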
	I1101 00:22:32.084705   30437 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 00:22:32.084945   30437 kapi.go:59] client config for multinode-600483: &rest.Config{Host:"https://192.168.39.130:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.key", CAFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 00:22:32.085253   30437 round_trippers.go:463] GET https://192.168.39.130:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1101 00:22:32.085266   30437 round_trippers.go:469] Request Headers:
	I1101 00:22:32.085273   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:22:32.085279   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:22:32.090822   30437 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1101 00:22:32.090845   30437 round_trippers.go:577] Response Headers:
	I1101 00:22:32.090854   30437 round_trippers.go:580]     Audit-Id: f7343b5e-569a-459b-85c6-c0f90669b259
	I1101 00:22:32.090863   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:22:32.090871   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:22:32.090879   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:22:32.090885   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:22:32.090891   30437 round_trippers.go:580]     Content-Length: 291
	I1101 00:22:32.090896   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:22:32 GMT
	I1101 00:22:32.090918   30437 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"21004493-8bb6-43e9-8ba2-65d98d570b24","resourceVersion":"848","creationTimestamp":"2023-11-01T00:08:30Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1101 00:22:32.091019   30437 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-600483" context rescaled to 1 replicas
	I1101 00:22:32.091051   30437 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.2 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}
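	The rescale logged above is roughly what the following manual command would do (a sketch):

	  $ kubectl -n kube-system scale deployment coredns --replicas=1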
	I1101 00:22:32.093525   30437 out.go:177] * Verifying Kubernetes components...
	I1101 00:22:32.094770   30437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 00:22:32.118556   30437 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 00:22:32.118765   30437 kapi.go:59] client config for multinode-600483: &rest.Config{Host:"https://192.168.39.130:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/profiles/multinode-600483/client.key", CAFile:"/home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 00:22:32.119081   30437 node_ready.go:35] waiting up to 6m0s for node "multinode-600483-m03" to be "Ready" ...
	I1101 00:22:32.119166   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m03
	I1101 00:22:32.119177   30437 round_trippers.go:469] Request Headers:
	I1101 00:22:32.119188   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:22:32.119198   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:22:32.122202   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:22:32.122222   30437 round_trippers.go:577] Response Headers:
	I1101 00:22:32.122230   30437 round_trippers.go:580]     Audit-Id: 81d73b97-f155-431f-9733-0c879436dea7
	I1101 00:22:32.122237   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:22:32.122246   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:22:32.122254   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:22:32.122264   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:22:32.122276   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:22:32 GMT
	I1101 00:22:32.122429   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483-m03","uid":"5f6aa7b7-0b01-4a29-91df-1e0d8ef97385","resourceVersion":"1170","creationTimestamp":"2023-11-01T00:22:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:22:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:22:31Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3440 chars]
	I1101 00:22:32.122670   30437 node_ready.go:49] node "multinode-600483-m03" has status "Ready":"True"
	I1101 00:22:32.122681   30437 node_ready.go:38] duration metric: took 3.574736ms waiting for node "multinode-600483-m03" to be "Ready" ...
	I1101 00:22:32.122691   30437 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 00:22:32.122743   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods
	I1101 00:22:32.122750   30437 round_trippers.go:469] Request Headers:
	I1101 00:22:32.122757   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:22:32.122765   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:22:32.127161   30437 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1101 00:22:32.127183   30437 round_trippers.go:577] Response Headers:
	I1101 00:22:32.127193   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:22:32 GMT
	I1101 00:22:32.127203   30437 round_trippers.go:580]     Audit-Id: 1615b41d-b2d0-4d2c-b45e-a7ee5df214ff
	I1101 00:22:32.127210   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:22:32.127215   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:22:32.127220   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:22:32.127226   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:22:32.129645   30437 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1177"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rpvvn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d8ab0ebb-aa1f-4143-b987-6c1ae065954a","resourceVersion":"833","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15779dee-f1e7-4836-aba2-2d57728c2309","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15779dee-f1e7-4836-aba2-2d57728c2309\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82225 chars]
	I1101 00:22:32.132138   30437 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rpvvn" in "kube-system" namespace to be "Ready" ...
	I1101 00:22:32.132203   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rpvvn
	I1101 00:22:32.132214   30437 round_trippers.go:469] Request Headers:
	I1101 00:22:32.132221   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:22:32.132227   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:22:32.134612   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:22:32.134632   30437 round_trippers.go:577] Response Headers:
	I1101 00:22:32.134642   30437 round_trippers.go:580]     Audit-Id: 1522334b-c003-4807-88e4-48ac39ede19c
	I1101 00:22:32.134652   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:22:32.134660   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:22:32.134665   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:22:32.134670   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:22:32.134675   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:22:32 GMT
	I1101 00:22:32.134984   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rpvvn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d8ab0ebb-aa1f-4143-b987-6c1ae065954a","resourceVersion":"833","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15779dee-f1e7-4836-aba2-2d57728c2309","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15779dee-f1e7-4836-aba2-2d57728c2309\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1101 00:22:32.135408   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:22:32.135422   30437 round_trippers.go:469] Request Headers:
	I1101 00:22:32.135429   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:22:32.135435   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:22:32.137497   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:22:32.137520   30437 round_trippers.go:577] Response Headers:
	I1101 00:22:32.137529   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:22:32 GMT
	I1101 00:22:32.137538   30437 round_trippers.go:580]     Audit-Id: fd3d66aa-5004-432a-abc0-eda9e5249f28
	I1101 00:22:32.137546   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:22:32.137553   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:22:32.137567   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:22:32.137572   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:22:32.137967   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"865","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 6220 chars]
	I1101 00:22:32.138235   30437 pod_ready.go:92] pod "coredns-5dd5756b68-rpvvn" in "kube-system" namespace has status "Ready":"True"
	I1101 00:22:32.138247   30437 pod_ready.go:81] duration metric: took 6.090562ms waiting for pod "coredns-5dd5756b68-rpvvn" in "kube-system" namespace to be "Ready" ...
	I1101 00:22:32.138254   30437 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:22:32.138299   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-600483
	I1101 00:22:32.138307   30437 round_trippers.go:469] Request Headers:
	I1101 00:22:32.138315   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:22:32.138321   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:22:32.140611   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:22:32.140631   30437 round_trippers.go:577] Response Headers:
	I1101 00:22:32.140640   30437 round_trippers.go:580]     Audit-Id: 8db5180f-842f-460a-923b-bdc42bb08614
	I1101 00:22:32.140648   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:22:32.140655   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:22:32.140665   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:22:32.140672   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:22:32.140678   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:22:32 GMT
	I1101 00:22:32.140898   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-600483","namespace":"kube-system","uid":"c612ebac-fa1d-474a-b8cd-5e922a5f76dd","resourceVersion":"827","creationTimestamp":"2023-11-01T00:08:30Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.130:2379","kubernetes.io/config.hash":"5629fb0a0414e85632f97c416152ffbb","kubernetes.io/config.mirror":"5629fb0a0414e85632f97c416152ffbb","kubernetes.io/config.seen":"2023-11-01T00:08:30.293496672Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1101 00:22:32.141215   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:22:32.141226   30437 round_trippers.go:469] Request Headers:
	I1101 00:22:32.141232   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:22:32.141238   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:22:32.143323   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:22:32.143338   30437 round_trippers.go:577] Response Headers:
	I1101 00:22:32.143346   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:22:32.143355   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:22:32.143364   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:22:32.143374   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:22:32.143381   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:22:32 GMT
	I1101 00:22:32.143387   30437 round_trippers.go:580]     Audit-Id: 377f1e24-693a-448b-bb6d-5fcbd5713b86
	I1101 00:22:32.143580   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"865","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 6220 chars]
	I1101 00:22:32.143883   30437 pod_ready.go:92] pod "etcd-multinode-600483" in "kube-system" namespace has status "Ready":"True"
	I1101 00:22:32.143898   30437 pod_ready.go:81] duration metric: took 5.637859ms waiting for pod "etcd-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:22:32.143914   30437 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:22:32.143975   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-600483
	I1101 00:22:32.143981   30437 round_trippers.go:469] Request Headers:
	I1101 00:22:32.143988   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:22:32.143997   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:22:32.145800   30437 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1101 00:22:32.145814   30437 round_trippers.go:577] Response Headers:
	I1101 00:22:32.145824   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:22:32.145833   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:22:32.145842   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:22:32.145852   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:22:32.145858   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:22:32 GMT
	I1101 00:22:32.145863   30437 round_trippers.go:580]     Audit-Id: 5b6c8a44-3360-441e-8c67-85c6630279db
	I1101 00:22:32.145994   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-600483","namespace":"kube-system","uid":"bd94a63a-62c2-4654-aaf0-2e9df086b168","resourceVersion":"843","creationTimestamp":"2023-11-01T00:08:30Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.130:8443","kubernetes.io/config.hash":"99a9cda13526c350638742a7c7b2ba52","kubernetes.io/config.mirror":"99a9cda13526c350638742a7c7b2ba52","kubernetes.io/config.seen":"2023-11-01T00:08:30.293497612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1101 00:22:32.146363   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:22:32.146377   30437 round_trippers.go:469] Request Headers:
	I1101 00:22:32.146384   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:22:32.146390   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:22:32.150956   30437 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1101 00:22:32.150973   30437 round_trippers.go:577] Response Headers:
	I1101 00:22:32.150980   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:22:32 GMT
	I1101 00:22:32.150985   30437 round_trippers.go:580]     Audit-Id: c3d952e8-5cd0-44d9-8b67-d5c2c18d5e57
	I1101 00:22:32.150990   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:22:32.150995   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:22:32.151000   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:22:32.151005   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:22:32.151701   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"865","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 6220 chars]
	I1101 00:22:32.152083   30437 pod_ready.go:92] pod "kube-apiserver-multinode-600483" in "kube-system" namespace has status "Ready":"True"
	I1101 00:22:32.152100   30437 pod_ready.go:81] duration metric: took 8.179366ms waiting for pod "kube-apiserver-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:22:32.152112   30437 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:22:32.152178   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-600483
	I1101 00:22:32.152187   30437 round_trippers.go:469] Request Headers:
	I1101 00:22:32.152195   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:22:32.152204   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:22:32.154870   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:22:32.154890   30437 round_trippers.go:577] Response Headers:
	I1101 00:22:32.154897   30437 round_trippers.go:580]     Audit-Id: 62750823-ff58-4190-bf92-e61155c377ed
	I1101 00:22:32.154902   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:22:32.154907   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:22:32.154912   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:22:32.154918   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:22:32.154926   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:22:32 GMT
	I1101 00:22:32.155155   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-600483","namespace":"kube-system","uid":"9dd41877-c6ea-4591-90e1-632a234ffcf6","resourceVersion":"845","creationTimestamp":"2023-11-01T00:08:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f2b1fcba8b34b1f65e600fae0bd4374a","kubernetes.io/config.mirror":"f2b1fcba8b34b1f65e600fae0bd4374a","kubernetes.io/config.seen":"2023-11-01T00:08:20.448799328Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1101 00:22:32.155502   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:22:32.155512   30437 round_trippers.go:469] Request Headers:
	I1101 00:22:32.155519   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:22:32.155526   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:22:32.158268   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:22:32.158288   30437 round_trippers.go:577] Response Headers:
	I1101 00:22:32.158296   30437 round_trippers.go:580]     Audit-Id: 4faccdf4-365c-4881-8ef5-6aeb3b397e96
	I1101 00:22:32.158305   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:22:32.158312   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:22:32.158320   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:22:32.158328   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:22:32.158336   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:22:32 GMT
	I1101 00:22:32.159323   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"865","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 6220 chars]
	I1101 00:22:32.159605   30437 pod_ready.go:92] pod "kube-controller-manager-multinode-600483" in "kube-system" namespace has status "Ready":"True"
	I1101 00:22:32.159621   30437 pod_ready.go:81] duration metric: took 7.500325ms waiting for pod "kube-controller-manager-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:22:32.159634   30437 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7kvtf" in "kube-system" namespace to be "Ready" ...
	I1101 00:22:32.320021   30437 request.go:629] Waited for 160.329705ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7kvtf
	I1101 00:22:32.320088   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7kvtf
	I1101 00:22:32.320095   30437 round_trippers.go:469] Request Headers:
	I1101 00:22:32.320103   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:22:32.320112   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:22:32.323847   30437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:22:32.323865   30437 round_trippers.go:577] Response Headers:
	I1101 00:22:32.323872   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:22:32.323878   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:22:32 GMT
	I1101 00:22:32.323882   30437 round_trippers.go:580]     Audit-Id: 80946c5f-3c11-41b1-b03c-8682f1547028
	I1101 00:22:32.323888   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:22:32.323897   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:22:32.323905   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:22:32.324313   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7kvtf","generateName":"kube-proxy-","namespace":"kube-system","uid":"e2101b7f-e517-4100-905d-f46517e68255","resourceVersion":"983","creationTimestamp":"2023-11-01T00:09:23Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2d674cb3-a003-4ca9-a8b5-a283ae64b7c6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:09:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d674cb3-a003-4ca9-a8b5-a283ae64b7c6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5729 chars]
	I1101 00:22:32.520083   30437 request.go:629] Waited for 195.37185ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m02
	I1101 00:22:32.520161   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m02
	I1101 00:22:32.520174   30437 round_trippers.go:469] Request Headers:
	I1101 00:22:32.520185   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:22:32.520195   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:22:32.523088   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:22:32.523104   30437 round_trippers.go:577] Response Headers:
	I1101 00:22:32.523110   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:22:32.523116   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:22:32.523125   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:22:32 GMT
	I1101 00:22:32.523130   30437 round_trippers.go:580]     Audit-Id: 2ac0b96c-d67e-4665-8410-b4d0c75e1814
	I1101 00:22:32.523135   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:22:32.523140   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:22:32.523397   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483-m02","uid":"36dbbc60-53e2-44a7-8be1-589b70b73c26","resourceVersion":"1012","creationTimestamp":"2023-11-01T00:20:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:20:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}
}}}},{"manager":"kube-controller-manager","operation":"Update","apiVers [truncated 3671 chars]
	I1101 00:22:32.523671   30437 pod_ready.go:92] pod "kube-proxy-7kvtf" in "kube-system" namespace has status "Ready":"True"
	I1101 00:22:32.523688   30437 pod_ready.go:81] duration metric: took 364.04583ms waiting for pod "kube-proxy-7kvtf" in "kube-system" namespace to be "Ready" ...
	I1101 00:22:32.523698   30437 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-84g2n" in "kube-system" namespace to be "Ready" ...
	I1101 00:22:32.720092   30437 request.go:629] Waited for 196.338603ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-proxy-84g2n
	I1101 00:22:32.720149   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-proxy-84g2n
	I1101 00:22:32.720155   30437 round_trippers.go:469] Request Headers:
	I1101 00:22:32.720168   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:22:32.720180   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:22:32.723726   30437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:22:32.723753   30437 round_trippers.go:577] Response Headers:
	I1101 00:22:32.723763   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:22:32.723772   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:22:32.723781   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:22:32.723788   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:22:32.723794   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:22:32 GMT
	I1101 00:22:32.723802   30437 round_trippers.go:580]     Audit-Id: 13f8020b-8d1a-4bfe-aac7-38eee1893426
	I1101 00:22:32.724210   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-84g2n","generateName":"kube-proxy-","namespace":"kube-system","uid":"a98efae3-9303-43be-a139-d21a5630c6b8","resourceVersion":"1174","creationTimestamp":"2023-11-01T00:10:15Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2d674cb3-a003-4ca9-a8b5-a283ae64b7c6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:10:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d674cb3-a003-4ca9-a8b5-a283ae64b7c6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5878 chars]
	I1101 00:22:32.920132   30437 request.go:629] Waited for 195.384887ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m03
	I1101 00:22:32.920189   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m03
	I1101 00:22:32.920195   30437 round_trippers.go:469] Request Headers:
	I1101 00:22:32.920202   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:22:32.920208   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:22:32.923496   30437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:22:32.923520   30437 round_trippers.go:577] Response Headers:
	I1101 00:22:32.923531   30437 round_trippers.go:580]     Audit-Id: b1656ee8-11b8-4cb3-bd19-b4ff471f19e8
	I1101 00:22:32.923540   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:22:32.923549   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:22:32.923556   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:22:32.923564   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:22:32.923572   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:22:32 GMT
	I1101 00:22:32.923789   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483-m03","uid":"5f6aa7b7-0b01-4a29-91df-1e0d8ef97385","resourceVersion":"1170","creationTimestamp":"2023-11-01T00:22:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:22:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:22:31Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3440 chars]
	I1101 00:22:33.120033   30437 request.go:629] Waited for 195.8407ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-proxy-84g2n
	I1101 00:22:33.120098   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-proxy-84g2n
	I1101 00:22:33.120104   30437 round_trippers.go:469] Request Headers:
	I1101 00:22:33.120116   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:22:33.120127   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:22:33.124524   30437 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1101 00:22:33.124551   30437 round_trippers.go:577] Response Headers:
	I1101 00:22:33.124562   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:22:33.124571   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:22:33.124579   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:22:33 GMT
	I1101 00:22:33.124586   30437 round_trippers.go:580]     Audit-Id: 46f54389-0c57-45f1-98fa-cae4c9ba5179
	I1101 00:22:33.124598   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:22:33.124604   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:22:33.125771   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-84g2n","generateName":"kube-proxy-","namespace":"kube-system","uid":"a98efae3-9303-43be-a139-d21a5630c6b8","resourceVersion":"1188","creationTimestamp":"2023-11-01T00:10:15Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2d674cb3-a003-4ca9-a8b5-a283ae64b7c6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:10:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d674cb3-a003-4ca9-a8b5-a283ae64b7c6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5722 chars]
	I1101 00:22:33.319569   30437 request.go:629] Waited for 193.363922ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m03
	I1101 00:22:33.319637   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483-m03
	I1101 00:22:33.319645   30437 round_trippers.go:469] Request Headers:
	I1101 00:22:33.319653   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:22:33.319662   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:22:33.322501   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:22:33.322525   30437 round_trippers.go:577] Response Headers:
	I1101 00:22:33.322532   30437 round_trippers.go:580]     Audit-Id: 66498aba-ddd5-407a-8994-c4df58fd9b63
	I1101 00:22:33.322537   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:22:33.322542   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:22:33.322557   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:22:33.322562   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:22:33.322568   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:22:33 GMT
	I1101 00:22:33.322813   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483-m03","uid":"5f6aa7b7-0b01-4a29-91df-1e0d8ef97385","resourceVersion":"1170","creationTimestamp":"2023-11-01T00:22:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:22:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:22:31Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3440 chars]
	I1101 00:22:33.323128   30437 pod_ready.go:92] pod "kube-proxy-84g2n" in "kube-system" namespace has status "Ready":"True"
	I1101 00:22:33.323145   30437 pod_ready.go:81] duration metric: took 799.440499ms waiting for pod "kube-proxy-84g2n" in "kube-system" namespace to be "Ready" ...
	I1101 00:22:33.323157   30437 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tq28b" in "kube-system" namespace to be "Ready" ...
	I1101 00:22:33.519614   30437 request.go:629] Waited for 196.399432ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tq28b
	I1101 00:22:33.519672   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tq28b
	I1101 00:22:33.519677   30437 round_trippers.go:469] Request Headers:
	I1101 00:22:33.519685   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:22:33.519691   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:22:33.524347   30437 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1101 00:22:33.524372   30437 round_trippers.go:577] Response Headers:
	I1101 00:22:33.524379   30437 round_trippers.go:580]     Audit-Id: c2874749-ad5c-41ed-a2f8-122928a5f560
	I1101 00:22:33.524384   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:22:33.524391   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:22:33.524399   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:22:33.524408   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:22:33.524417   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:22:33 GMT
	I1101 00:22:33.526892   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tq28b","generateName":"kube-proxy-","namespace":"kube-system","uid":"9534d8b8-4536-4a0a-8af5-440e6871a85f","resourceVersion":"793","creationTimestamp":"2023-11-01T00:08:42Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2d674cb3-a003-4ca9-a8b5-a283ae64b7c6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d674cb3-a003-4ca9-a8b5-a283ae64b7c6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1101 00:22:33.719683   30437 request.go:629] Waited for 192.378848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:22:33.719739   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:22:33.719744   30437 round_trippers.go:469] Request Headers:
	I1101 00:22:33.719751   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:22:33.719760   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:22:33.721938   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:22:33.721963   30437 round_trippers.go:577] Response Headers:
	I1101 00:22:33.721974   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:22:33.721981   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:22:33.721986   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:22:33 GMT
	I1101 00:22:33.721991   30437 round_trippers.go:580]     Audit-Id: 743ea667-6d23-4af4-b952-725096b4ee83
	I1101 00:22:33.721996   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:22:33.722001   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:22:33.722337   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"865","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 6220 chars]
	I1101 00:22:33.722741   30437 pod_ready.go:92] pod "kube-proxy-tq28b" in "kube-system" namespace has status "Ready":"True"
	I1101 00:22:33.722759   30437 pod_ready.go:81] duration metric: took 399.594127ms waiting for pod "kube-proxy-tq28b" in "kube-system" namespace to be "Ready" ...
	I1101 00:22:33.722771   30437 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:22:33.920136   30437 request.go:629] Waited for 197.306451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-600483
	I1101 00:22:33.920215   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-600483
	I1101 00:22:33.920223   30437 round_trippers.go:469] Request Headers:
	I1101 00:22:33.920232   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:22:33.920242   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:22:33.923071   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:22:33.923091   30437 round_trippers.go:577] Response Headers:
	I1101 00:22:33.923101   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:22:33 GMT
	I1101 00:22:33.923110   30437 round_trippers.go:580]     Audit-Id: efe2f199-27ab-4502-bfde-07f39b3fb4f7
	I1101 00:22:33.923126   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:22:33.923135   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:22:33.923143   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:22:33.923148   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:22:33.923285   30437 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-600483","namespace":"kube-system","uid":"9cdd0be5-035a-49f5-8796-831ebde28bf0","resourceVersion":"826","creationTimestamp":"2023-11-01T00:08:30Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"01c4e8f68a00a3553dcff3388cb56149","kubernetes.io/config.mirror":"01c4e8f68a00a3553dcff3388cb56149","kubernetes.io/config.seen":"2023-11-01T00:08:30.293495470Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:08:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1101 00:22:34.120021   30437 request.go:629] Waited for 196.375245ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:22:34.120105   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes/multinode-600483
	I1101 00:22:34.120115   30437 round_trippers.go:469] Request Headers:
	I1101 00:22:34.120133   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:22:34.120147   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:22:34.122984   30437 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:22:34.123006   30437 round_trippers.go:577] Response Headers:
	I1101 00:22:34.123019   30437 round_trippers.go:580]     Audit-Id: 9c742970-8ed6-4e1d-94ce-d7dfccfdd002
	I1101 00:22:34.123027   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:22:34.123035   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:22:34.123044   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:22:34.123052   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:22:34.123062   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:22:34 GMT
	I1101 00:22:34.123181   30437 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"865","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T00:08:26Z","fieldsType":"FieldsV1 [truncated 6220 chars]
	I1101 00:22:34.123484   30437 pod_ready.go:92] pod "kube-scheduler-multinode-600483" in "kube-system" namespace has status "Ready":"True"
	I1101 00:22:34.123500   30437 pod_ready.go:81] duration metric: took 400.720171ms waiting for pod "kube-scheduler-multinode-600483" in "kube-system" namespace to be "Ready" ...
	I1101 00:22:34.123515   30437 pod_ready.go:38] duration metric: took 2.000812321s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 00:22:34.123533   30437 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 00:22:34.123597   30437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 00:22:34.137572   30437 system_svc.go:56] duration metric: took 14.033984ms WaitForService to wait for kubelet.
	I1101 00:22:34.137594   30437 kubeadm.go:581] duration metric: took 2.046515331s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1101 00:22:34.137610   30437 node_conditions.go:102] verifying NodePressure condition ...
	I1101 00:22:34.320008   30437 request.go:629] Waited for 182.328419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.130:8443/api/v1/nodes
	I1101 00:22:34.320072   30437 round_trippers.go:463] GET https://192.168.39.130:8443/api/v1/nodes
	I1101 00:22:34.320080   30437 round_trippers.go:469] Request Headers:
	I1101 00:22:34.320111   30437 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:22:34.320125   30437 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:22:34.323819   30437 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:22:34.323839   30437 round_trippers.go:577] Response Headers:
	I1101 00:22:34.323849   30437 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:22:34 GMT
	I1101 00:22:34.323856   30437 round_trippers.go:580]     Audit-Id: c8cc4343-32a3-45c5-8555-f4979fccb63f
	I1101 00:22:34.323864   30437 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:22:34.323871   30437 round_trippers.go:580]     Content-Type: application/json
	I1101 00:22:34.323880   30437 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4599f671-0414-4d8c-934f-dd32f9c90a5f
	I1101 00:22:34.323890   30437 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 28b6b2f1-021a-41bf-9f3c-caf166197059
	I1101 00:22:34.324466   30437 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1194"},"items":[{"metadata":{"name":"multinode-600483","uid":"335fe09d-5376-49ee-aab9-08de53fbf279","resourceVersion":"865","creationTimestamp":"2023-11-01T00:08:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600483","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-600483","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_08_31_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"manag
edFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1", [truncated 15369 chars]
	I1101 00:22:34.325001   30437 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 00:22:34.325018   30437 node_conditions.go:123] node cpu capacity is 2
	I1101 00:22:34.325027   30437 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 00:22:34.325031   30437 node_conditions.go:123] node cpu capacity is 2
	I1101 00:22:34.325034   30437 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 00:22:34.325038   30437 node_conditions.go:123] node cpu capacity is 2
	I1101 00:22:34.325042   30437 node_conditions.go:105] duration metric: took 187.427738ms to run NodePressure ...
	I1101 00:22:34.325056   30437 start.go:228] waiting for startup goroutines ...
	I1101 00:22:34.325076   30437 start.go:242] writing updated cluster config ...
	I1101 00:22:34.325347   30437 ssh_runner.go:195] Run: rm -f paused
	I1101 00:22:34.376848   30437 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1101 00:22:34.380007   30437 out.go:177] * Done! kubectl is now configured to use "multinode-600483" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-11-01 00:18:22 UTC, ends at Wed 2023-11-01 00:22:35 UTC. --
	Nov 01 00:22:35 multinode-600483 crio[711]: time="2023-11-01 00:22:35.451525989Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698798155451510613,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=661b2001-c06e-4f51-8ade-f60d576ebede name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 00:22:35 multinode-600483 crio[711]: time="2023-11-01 00:22:35.452289827Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=537a996a-c570-43bd-ac50-e72e64b0a738 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:22:35 multinode-600483 crio[711]: time="2023-11-01 00:22:35.452374777Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=537a996a-c570-43bd-ac50-e72e64b0a738 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:22:35 multinode-600483 crio[711]: time="2023-11-01 00:22:35.452588765Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2a1f251fba977bc8ff5a4b535d645033d02c8487ba1b9327c2e56e7c913d19b7,PodSandboxId:66503d177c69c3c34a9f45f7c5f3a050b41e01ee6b7bbe2f7584f9eac9f3242e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698797969515083982,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a67f136b-7645-4eb9-9568-52e3ab06d66e,},Annotations:map[string]string{io.kubernetes.container.hash: b02dd2ba,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d7531ce7a41477787358c76db8efcd31808e1027f5a85d35cad433d4024c4bc,PodSandboxId:8939ff4555f3eafac54721e32886dffa9f8e10e398aef687437656773391653f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1698797948467493333,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-8pjvd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85bd3938-9131-4eed-b6f7-7a4cd85f2cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 21c8e4ef,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:745fda692faf1ef87860812d635eaa2660b681b835b230a31655312b9ee4a1e8,PodSandboxId:73a59e99487f8b21ce2b5d94e22caa69acde99061c99bd6c8e90669c852f5bfc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698797945799205894,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rpvvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ab0ebb-aa1f-4143-b987-6c1ae065954a,},Annotations:map[string]string{io.kubernetes.container.hash: b7412969,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b312b97d2097f1b35bf54ff3ac20cb78f563bfbceed896b7ebd65c359a47e9f6,PodSandboxId:5b4cd9aad76aaac314d32dadf43b80794c45c15ce442f2b6ca93f2fdc2a5ca46,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1698797940719011751,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l75r4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: abfa8ec3-0565-4927-a07c-9fed1240d270,},Annotations:map[string]string{io.kubernetes.container.hash: 616a3e1f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52467900ce0cde1478acba0f86f681e327d9a8c8a0d22bf34a953f5f86a3ad59,PodSandboxId:66503d177c69c3c34a9f45f7c5f3a050b41e01ee6b7bbe2f7584f9eac9f3242e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698797938304462602,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: a67f136b-7645-4eb9-9568-52e3ab06d66e,},Annotations:map[string]string{io.kubernetes.container.hash: b02dd2ba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a6c80113ca8ea3ff5651655b9969332e7ed18284092fbc4fa1a6e8cacc1e9fc,PodSandboxId:03400cc530bc605da5a426154ef492aea7c91d97e6fba045b9270ff0d14978aa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698797938225805980,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tq28b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9534d8b8-4536-4a0a-8af5-440e6871
a85f,},Annotations:map[string]string{io.kubernetes.container.hash: b785be7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:273e498f903b9531b00702bfa0015fde1b25c49abbecbf2345189172a03ad6bb,PodSandboxId:a7d0d81e557711f31342f8a3c94214063e25d3738a97398b1160cc74dfa93e74,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698797932119036591,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-600483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01c4e8f68a00a3553dcff3388cb56149,},Annota
tions:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:114ad85419df03d41aa4857ade980368ebafbe728db093aed812e9e2ec94bb1e,PodSandboxId:12a1fe932c9bbcc9d82c35af0efc7e3f47f7cde5fab3f8850dc98a393ac27fad,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698797931881167564,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-600483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5629fb0a0414e85632f97c416152ffbb,},Annotations:map[string]string{io.kubernetes.container.hash
: 4ec05eac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d46c4f1bc5d81867b1d303a8860b6b0f2b8ad779bca25468733976a98e66cbd,PodSandboxId:738f714afe62d3074a45770b054543cf0d3e70f8d21e0ed372b4a52aaeaa2500,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698797931376158052,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-600483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2b1fcba8b34b1f65e600fae0bd4374a,},Annotations:map[string]string{io.k
ubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b2c6ec1bf480f19c7eb9dfe923b7da3412cccb0f94dd2432addde95b1649bee,PodSandboxId:f980456bf2a7e929658d64f15e490d18f7566043d2d1b1ffe385f3da0f2cd6dd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698797931278225073,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-600483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99a9cda13526c350638742a7c7b2ba52,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 7bfab165,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=537a996a-c570-43bd-ac50-e72e64b0a738 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:22:35 multinode-600483 crio[711]: time="2023-11-01 00:22:35.501860471Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=92e8f063-313c-4643-adcc-dc745b70eff8 name=/runtime.v1.RuntimeService/Version
	Nov 01 00:22:35 multinode-600483 crio[711]: time="2023-11-01 00:22:35.502011175Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=92e8f063-313c-4643-adcc-dc745b70eff8 name=/runtime.v1.RuntimeService/Version
	Nov 01 00:22:35 multinode-600483 crio[711]: time="2023-11-01 00:22:35.508307623Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d8edce8e-f24a-48ac-9830-5e78141aa0ac name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 00:22:35 multinode-600483 crio[711]: time="2023-11-01 00:22:35.508766429Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698798155508751110,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=d8edce8e-f24a-48ac-9830-5e78141aa0ac name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 00:22:35 multinode-600483 crio[711]: time="2023-11-01 00:22:35.509492948Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e78f1de6-43fb-4813-b7f8-690bbfb8b441 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:22:35 multinode-600483 crio[711]: time="2023-11-01 00:22:35.509572490Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e78f1de6-43fb-4813-b7f8-690bbfb8b441 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:22:35 multinode-600483 crio[711]: time="2023-11-01 00:22:35.509801444Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2a1f251fba977bc8ff5a4b535d645033d02c8487ba1b9327c2e56e7c913d19b7,PodSandboxId:66503d177c69c3c34a9f45f7c5f3a050b41e01ee6b7bbe2f7584f9eac9f3242e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698797969515083982,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a67f136b-7645-4eb9-9568-52e3ab06d66e,},Annotations:map[string]string{io.kubernetes.container.hash: b02dd2ba,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d7531ce7a41477787358c76db8efcd31808e1027f5a85d35cad433d4024c4bc,PodSandboxId:8939ff4555f3eafac54721e32886dffa9f8e10e398aef687437656773391653f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1698797948467493333,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-8pjvd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85bd3938-9131-4eed-b6f7-7a4cd85f2cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 21c8e4ef,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:745fda692faf1ef87860812d635eaa2660b681b835b230a31655312b9ee4a1e8,PodSandboxId:73a59e99487f8b21ce2b5d94e22caa69acde99061c99bd6c8e90669c852f5bfc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698797945799205894,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rpvvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ab0ebb-aa1f-4143-b987-6c1ae065954a,},Annotations:map[string]string{io.kubernetes.container.hash: b7412969,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b312b97d2097f1b35bf54ff3ac20cb78f563bfbceed896b7ebd65c359a47e9f6,PodSandboxId:5b4cd9aad76aaac314d32dadf43b80794c45c15ce442f2b6ca93f2fdc2a5ca46,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1698797940719011751,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l75r4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: abfa8ec3-0565-4927-a07c-9fed1240d270,},Annotations:map[string]string{io.kubernetes.container.hash: 616a3e1f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52467900ce0cde1478acba0f86f681e327d9a8c8a0d22bf34a953f5f86a3ad59,PodSandboxId:66503d177c69c3c34a9f45f7c5f3a050b41e01ee6b7bbe2f7584f9eac9f3242e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698797938304462602,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: a67f136b-7645-4eb9-9568-52e3ab06d66e,},Annotations:map[string]string{io.kubernetes.container.hash: b02dd2ba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a6c80113ca8ea3ff5651655b9969332e7ed18284092fbc4fa1a6e8cacc1e9fc,PodSandboxId:03400cc530bc605da5a426154ef492aea7c91d97e6fba045b9270ff0d14978aa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698797938225805980,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tq28b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9534d8b8-4536-4a0a-8af5-440e6871
a85f,},Annotations:map[string]string{io.kubernetes.container.hash: b785be7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:273e498f903b9531b00702bfa0015fde1b25c49abbecbf2345189172a03ad6bb,PodSandboxId:a7d0d81e557711f31342f8a3c94214063e25d3738a97398b1160cc74dfa93e74,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698797932119036591,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-600483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01c4e8f68a00a3553dcff3388cb56149,},Annota
tions:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:114ad85419df03d41aa4857ade980368ebafbe728db093aed812e9e2ec94bb1e,PodSandboxId:12a1fe932c9bbcc9d82c35af0efc7e3f47f7cde5fab3f8850dc98a393ac27fad,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698797931881167564,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-600483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5629fb0a0414e85632f97c416152ffbb,},Annotations:map[string]string{io.kubernetes.container.hash
: 4ec05eac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d46c4f1bc5d81867b1d303a8860b6b0f2b8ad779bca25468733976a98e66cbd,PodSandboxId:738f714afe62d3074a45770b054543cf0d3e70f8d21e0ed372b4a52aaeaa2500,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698797931376158052,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-600483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2b1fcba8b34b1f65e600fae0bd4374a,},Annotations:map[string]string{io.k
ubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b2c6ec1bf480f19c7eb9dfe923b7da3412cccb0f94dd2432addde95b1649bee,PodSandboxId:f980456bf2a7e929658d64f15e490d18f7566043d2d1b1ffe385f3da0f2cd6dd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698797931278225073,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-600483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99a9cda13526c350638742a7c7b2ba52,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 7bfab165,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e78f1de6-43fb-4813-b7f8-690bbfb8b441 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:22:35 multinode-600483 crio[711]: time="2023-11-01 00:22:35.556842943Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=fcc9d718-008c-42bb-92db-ce21a2172a14 name=/runtime.v1.RuntimeService/Version
	Nov 01 00:22:35 multinode-600483 crio[711]: time="2023-11-01 00:22:35.557041486Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=fcc9d718-008c-42bb-92db-ce21a2172a14 name=/runtime.v1.RuntimeService/Version
	Nov 01 00:22:35 multinode-600483 crio[711]: time="2023-11-01 00:22:35.558579627Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=fd7fbfd1-9bbd-4642-a48f-11f4072e1904 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 00:22:35 multinode-600483 crio[711]: time="2023-11-01 00:22:35.559104815Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698798155559089345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=fd7fbfd1-9bbd-4642-a48f-11f4072e1904 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 00:22:35 multinode-600483 crio[711]: time="2023-11-01 00:22:35.559762809Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5a5b715b-455f-4067-baa4-89326387c221 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:22:35 multinode-600483 crio[711]: time="2023-11-01 00:22:35.559812912Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5a5b715b-455f-4067-baa4-89326387c221 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:22:35 multinode-600483 crio[711]: time="2023-11-01 00:22:35.560122871Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2a1f251fba977bc8ff5a4b535d645033d02c8487ba1b9327c2e56e7c913d19b7,PodSandboxId:66503d177c69c3c34a9f45f7c5f3a050b41e01ee6b7bbe2f7584f9eac9f3242e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698797969515083982,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a67f136b-7645-4eb9-9568-52e3ab06d66e,},Annotations:map[string]string{io.kubernetes.container.hash: b02dd2ba,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d7531ce7a41477787358c76db8efcd31808e1027f5a85d35cad433d4024c4bc,PodSandboxId:8939ff4555f3eafac54721e32886dffa9f8e10e398aef687437656773391653f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1698797948467493333,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-8pjvd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85bd3938-9131-4eed-b6f7-7a4cd85f2cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 21c8e4ef,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:745fda692faf1ef87860812d635eaa2660b681b835b230a31655312b9ee4a1e8,PodSandboxId:73a59e99487f8b21ce2b5d94e22caa69acde99061c99bd6c8e90669c852f5bfc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698797945799205894,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rpvvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ab0ebb-aa1f-4143-b987-6c1ae065954a,},Annotations:map[string]string{io.kubernetes.container.hash: b7412969,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b312b97d2097f1b35bf54ff3ac20cb78f563bfbceed896b7ebd65c359a47e9f6,PodSandboxId:5b4cd9aad76aaac314d32dadf43b80794c45c15ce442f2b6ca93f2fdc2a5ca46,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1698797940719011751,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l75r4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: abfa8ec3-0565-4927-a07c-9fed1240d270,},Annotations:map[string]string{io.kubernetes.container.hash: 616a3e1f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52467900ce0cde1478acba0f86f681e327d9a8c8a0d22bf34a953f5f86a3ad59,PodSandboxId:66503d177c69c3c34a9f45f7c5f3a050b41e01ee6b7bbe2f7584f9eac9f3242e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698797938304462602,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: a67f136b-7645-4eb9-9568-52e3ab06d66e,},Annotations:map[string]string{io.kubernetes.container.hash: b02dd2ba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a6c80113ca8ea3ff5651655b9969332e7ed18284092fbc4fa1a6e8cacc1e9fc,PodSandboxId:03400cc530bc605da5a426154ef492aea7c91d97e6fba045b9270ff0d14978aa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698797938225805980,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tq28b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9534d8b8-4536-4a0a-8af5-440e6871
a85f,},Annotations:map[string]string{io.kubernetes.container.hash: b785be7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:273e498f903b9531b00702bfa0015fde1b25c49abbecbf2345189172a03ad6bb,PodSandboxId:a7d0d81e557711f31342f8a3c94214063e25d3738a97398b1160cc74dfa93e74,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698797932119036591,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-600483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01c4e8f68a00a3553dcff3388cb56149,},Annota
tions:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:114ad85419df03d41aa4857ade980368ebafbe728db093aed812e9e2ec94bb1e,PodSandboxId:12a1fe932c9bbcc9d82c35af0efc7e3f47f7cde5fab3f8850dc98a393ac27fad,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698797931881167564,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-600483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5629fb0a0414e85632f97c416152ffbb,},Annotations:map[string]string{io.kubernetes.container.hash
: 4ec05eac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d46c4f1bc5d81867b1d303a8860b6b0f2b8ad779bca25468733976a98e66cbd,PodSandboxId:738f714afe62d3074a45770b054543cf0d3e70f8d21e0ed372b4a52aaeaa2500,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698797931376158052,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-600483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2b1fcba8b34b1f65e600fae0bd4374a,},Annotations:map[string]string{io.k
ubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b2c6ec1bf480f19c7eb9dfe923b7da3412cccb0f94dd2432addde95b1649bee,PodSandboxId:f980456bf2a7e929658d64f15e490d18f7566043d2d1b1ffe385f3da0f2cd6dd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698797931278225073,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-600483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99a9cda13526c350638742a7c7b2ba52,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 7bfab165,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5a5b715b-455f-4067-baa4-89326387c221 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:22:35 multinode-600483 crio[711]: time="2023-11-01 00:22:35.599428563Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3b8ed32f-15a8-497a-ba20-f06ec55af3d6 name=/runtime.v1.RuntimeService/Version
	Nov 01 00:22:35 multinode-600483 crio[711]: time="2023-11-01 00:22:35.599504437Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3b8ed32f-15a8-497a-ba20-f06ec55af3d6 name=/runtime.v1.RuntimeService/Version
	Nov 01 00:22:35 multinode-600483 crio[711]: time="2023-11-01 00:22:35.601446299Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d175188f-42c3-4d15-802f-5a47c7a0218d name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 00:22:35 multinode-600483 crio[711]: time="2023-11-01 00:22:35.601850245Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698798155601831700,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=d175188f-42c3-4d15-802f-5a47c7a0218d name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 00:22:35 multinode-600483 crio[711]: time="2023-11-01 00:22:35.602474385Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=615a0a6c-4675-4706-bdd0-95b688701cce name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:22:35 multinode-600483 crio[711]: time="2023-11-01 00:22:35.602540231Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=615a0a6c-4675-4706-bdd0-95b688701cce name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:22:35 multinode-600483 crio[711]: time="2023-11-01 00:22:35.602772562Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2a1f251fba977bc8ff5a4b535d645033d02c8487ba1b9327c2e56e7c913d19b7,PodSandboxId:66503d177c69c3c34a9f45f7c5f3a050b41e01ee6b7bbe2f7584f9eac9f3242e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698797969515083982,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a67f136b-7645-4eb9-9568-52e3ab06d66e,},Annotations:map[string]string{io.kubernetes.container.hash: b02dd2ba,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d7531ce7a41477787358c76db8efcd31808e1027f5a85d35cad433d4024c4bc,PodSandboxId:8939ff4555f3eafac54721e32886dffa9f8e10e398aef687437656773391653f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1698797948467493333,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-8pjvd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85bd3938-9131-4eed-b6f7-7a4cd85f2cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 21c8e4ef,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:745fda692faf1ef87860812d635eaa2660b681b835b230a31655312b9ee4a1e8,PodSandboxId:73a59e99487f8b21ce2b5d94e22caa69acde99061c99bd6c8e90669c852f5bfc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698797945799205894,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rpvvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ab0ebb-aa1f-4143-b987-6c1ae065954a,},Annotations:map[string]string{io.kubernetes.container.hash: b7412969,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b312b97d2097f1b35bf54ff3ac20cb78f563bfbceed896b7ebd65c359a47e9f6,PodSandboxId:5b4cd9aad76aaac314d32dadf43b80794c45c15ce442f2b6ca93f2fdc2a5ca46,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1698797940719011751,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l75r4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: abfa8ec3-0565-4927-a07c-9fed1240d270,},Annotations:map[string]string{io.kubernetes.container.hash: 616a3e1f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52467900ce0cde1478acba0f86f681e327d9a8c8a0d22bf34a953f5f86a3ad59,PodSandboxId:66503d177c69c3c34a9f45f7c5f3a050b41e01ee6b7bbe2f7584f9eac9f3242e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698797938304462602,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: a67f136b-7645-4eb9-9568-52e3ab06d66e,},Annotations:map[string]string{io.kubernetes.container.hash: b02dd2ba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a6c80113ca8ea3ff5651655b9969332e7ed18284092fbc4fa1a6e8cacc1e9fc,PodSandboxId:03400cc530bc605da5a426154ef492aea7c91d97e6fba045b9270ff0d14978aa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698797938225805980,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tq28b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9534d8b8-4536-4a0a-8af5-440e6871
a85f,},Annotations:map[string]string{io.kubernetes.container.hash: b785be7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:273e498f903b9531b00702bfa0015fde1b25c49abbecbf2345189172a03ad6bb,PodSandboxId:a7d0d81e557711f31342f8a3c94214063e25d3738a97398b1160cc74dfa93e74,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698797932119036591,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-600483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01c4e8f68a00a3553dcff3388cb56149,},Annota
tions:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:114ad85419df03d41aa4857ade980368ebafbe728db093aed812e9e2ec94bb1e,PodSandboxId:12a1fe932c9bbcc9d82c35af0efc7e3f47f7cde5fab3f8850dc98a393ac27fad,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698797931881167564,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-600483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5629fb0a0414e85632f97c416152ffbb,},Annotations:map[string]string{io.kubernetes.container.hash
: 4ec05eac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d46c4f1bc5d81867b1d303a8860b6b0f2b8ad779bca25468733976a98e66cbd,PodSandboxId:738f714afe62d3074a45770b054543cf0d3e70f8d21e0ed372b4a52aaeaa2500,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698797931376158052,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-600483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2b1fcba8b34b1f65e600fae0bd4374a,},Annotations:map[string]string{io.k
ubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b2c6ec1bf480f19c7eb9dfe923b7da3412cccb0f94dd2432addde95b1649bee,PodSandboxId:f980456bf2a7e929658d64f15e490d18f7566043d2d1b1ffe385f3da0f2cd6dd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698797931278225073,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-600483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99a9cda13526c350638742a7c7b2ba52,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 7bfab165,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=615a0a6c-4675-4706-bdd0-95b688701cce name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2a1f251fba977       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   66503d177c69c       storage-provisioner
	2d7531ce7a414       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   8939ff4555f3e       busybox-5bc68d56bd-8pjvd
	745fda692faf1       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   73a59e99487f8       coredns-5dd5756b68-rpvvn
	b312b97d2097f       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      3 minutes ago       Running             kindnet-cni               1                   5b4cd9aad76aa       kindnet-l75r4
	52467900ce0cd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   66503d177c69c       storage-provisioner
	9a6c80113ca8e       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                      3 minutes ago       Running             kube-proxy                1                   03400cc530bc6       kube-proxy-tq28b
	273e498f903b9       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                      3 minutes ago       Running             kube-scheduler            1                   a7d0d81e55771       kube-scheduler-multinode-600483
	114ad85419df0       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   12a1fe932c9bb       etcd-multinode-600483
	8d46c4f1bc5d8       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                      3 minutes ago       Running             kube-controller-manager   1                   738f714afe62d       kube-controller-manager-multinode-600483
	8b2c6ec1bf480       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                      3 minutes ago       Running             kube-apiserver            1                   f980456bf2a7e       kube-apiserver-multinode-600483
	
	* 
	* ==> coredns [745fda692faf1ef87860812d635eaa2660b681b835b230a31655312b9ee4a1e8] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35658 - 34572 "HINFO IN 6426782522065429.3940567887366541018. udp 54 false 512" NXDOMAIN qr,rd,ra 54 0.022076077s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-600483
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-600483
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9
	                    minikube.k8s.io/name=multinode-600483
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_01T00_08_31_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Nov 2023 00:08:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-600483
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Nov 2023 00:22:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Nov 2023 00:19:27 +0000   Wed, 01 Nov 2023 00:08:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Nov 2023 00:19:27 +0000   Wed, 01 Nov 2023 00:08:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Nov 2023 00:19:27 +0000   Wed, 01 Nov 2023 00:08:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Nov 2023 00:19:27 +0000   Wed, 01 Nov 2023 00:19:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.130
	  Hostname:    multinode-600483
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 c90a2c78054c41b8a38c897c10fb049f
	  System UUID:                c90a2c78-054c-41b8-a38c-897c10fb049f
	  Boot ID:                    7c60df61-6761-4f13-959f-6adaf3017550
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-8pjvd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-5dd5756b68-rpvvn                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-600483                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-l75r4                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-600483             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-600483    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-tq28b                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-600483             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 3m37s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node multinode-600483 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node multinode-600483 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node multinode-600483 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           13m                    node-controller  Node multinode-600483 event: Registered Node multinode-600483 in Controller
	  Normal  NodeReady                13m                    kubelet          Node multinode-600483 status is now: NodeReady
	  Normal  Starting                 3m45s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m45s (x8 over 3m45s)  kubelet          Node multinode-600483 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m45s (x8 over 3m45s)  kubelet          Node multinode-600483 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m45s (x7 over 3m45s)  kubelet          Node multinode-600483 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m26s                  node-controller  Node multinode-600483 event: Registered Node multinode-600483 in Controller
	
	
	Name:               multinode-600483-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-600483-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Nov 2023 00:20:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-600483-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Nov 2023 00:22:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Nov 2023 00:20:51 +0000   Wed, 01 Nov 2023 00:20:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Nov 2023 00:20:51 +0000   Wed, 01 Nov 2023 00:20:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Nov 2023 00:20:51 +0000   Wed, 01 Nov 2023 00:20:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Nov 2023 00:20:51 +0000   Wed, 01 Nov 2023 00:20:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.109
	  Hostname:    multinode-600483-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 f78e4f3cc00e410291c60e98e0c3e140
	  System UUID:                f78e4f3c-c00e-4102-91c6-0e98e0c3e140
	  Boot ID:                    6c188077-8669-4787-ac18-4d43e70351fa
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-zccxq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-d4f6q               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-7kvtf            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From        Message
	  ----     ------                   ----                   ----        -------
	  Normal   Starting                 13m                    kube-proxy  
	  Normal   Starting                 106s                   kube-proxy  
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)      kubelet     Node multinode-600483-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)      kubelet     Node multinode-600483-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)      kubelet     Node multinode-600483-m02 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             2m58s                  kubelet     Node multinode-600483-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m14s (x2 over 3m14s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotSchedulable       106s                   kubelet     Node multinode-600483-m02 status is now: NodeNotSchedulable
	  Normal   NodeReady                106s (x2 over 13m)     kubelet     Node multinode-600483-m02 status is now: NodeReady
	  Normal   Starting                 105s                   kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  105s (x2 over 105s)    kubelet     Node multinode-600483-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    105s (x2 over 105s)    kubelet     Node multinode-600483-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     105s (x2 over 105s)    kubelet     Node multinode-600483-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  105s                   kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                104s                   kubelet     Node multinode-600483-m02 status is now: NodeReady
	
	
	Name:               multinode-600483-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-600483-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Nov 2023 00:22:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-600483-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Nov 2023 00:22:31 +0000   Wed, 01 Nov 2023 00:22:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Nov 2023 00:22:31 +0000   Wed, 01 Nov 2023 00:22:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Nov 2023 00:22:31 +0000   Wed, 01 Nov 2023 00:22:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Nov 2023 00:22:31 +0000   Wed, 01 Nov 2023 00:22:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.2
	  Hostname:    multinode-600483-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 efe84be21d394e26b02f8fdbf38c8124
	  System UUID:                efe84be2-1d39-4e26-b02f-8fdbf38c8124
	  Boot ID:                    69126b26-e67e-4d3b-849c-56ba93ba8151
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-nsjs7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kindnet-ldrkn               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-84g2n            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From        Message
	  ----     ------                   ----               ----        -------
	  Normal   Starting                 11m                kube-proxy  
	  Normal   Starting                 12m                kube-proxy  
	  Normal   Starting                 3s                 kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet     Node multinode-600483-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)  kubelet     Node multinode-600483-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)  kubelet     Node multinode-600483-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                kubelet     Node multinode-600483-m03 status is now: NodeReady
	  Normal   Starting                 11m                kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)  kubelet     Node multinode-600483-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  11m                kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet     Node multinode-600483-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)  kubelet     Node multinode-600483-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                11m                kubelet     Node multinode-600483-m03 status is now: NodeReady
	  Normal   NodeNotReady             65s                kubelet     Node multinode-600483-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        36s (x2 over 96s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 4s                 kubelet     Starting kubelet.
	  Normal   NodeHasNoDiskPressure    4s (x2 over 4s)    kubelet     Node multinode-600483-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4s (x2 over 4s)    kubelet     Node multinode-600483-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  4s                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                4s                 kubelet     Node multinode-600483-m03 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  4s (x2 over 4s)    kubelet     Node multinode-600483-m03 status is now: NodeHasSufficientMemory
	
	* 
	* ==> dmesg <==
	* [Nov 1 00:18] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.064723] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.330951] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.843756] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.139260] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.428987] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.930298] systemd-fstab-generator[637]: Ignoring "noauto" for root device
	[  +0.111427] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.148642] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.116606] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.220254] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[ +16.974898] systemd-fstab-generator[911]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [114ad85419df03d41aa4857ade980368ebafbe728db093aed812e9e2ec94bb1e] <==
	* {"level":"info","ts":"2023-11-01T00:18:53.587374Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-01T00:18:53.587385Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-01T00:18:53.58783Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3bfdfb8084d9036b switched to configuration voters=(4322887746748744555)"}
	{"level":"info","ts":"2023-11-01T00:18:53.58791Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b31a7968a7efeeee","local-member-id":"3bfdfb8084d9036b","added-peer-id":"3bfdfb8084d9036b","added-peer-peer-urls":["https://192.168.39.130:2380"]}
	{"level":"info","ts":"2023-11-01T00:18:53.588115Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b31a7968a7efeeee","local-member-id":"3bfdfb8084d9036b","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T00:18:53.588173Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T00:18:53.592924Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-01T00:18:53.593252Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"3bfdfb8084d9036b","initial-advertise-peer-urls":["https://192.168.39.130:2380"],"listen-peer-urls":["https://192.168.39.130:2380"],"advertise-client-urls":["https://192.168.39.130:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.130:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-01T00:18:53.593308Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-01T00:18:53.593418Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.130:2380"}
	{"level":"info","ts":"2023-11-01T00:18:53.593443Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.130:2380"}
	{"level":"info","ts":"2023-11-01T00:18:55.33248Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3bfdfb8084d9036b is starting a new election at term 2"}
	{"level":"info","ts":"2023-11-01T00:18:55.332622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3bfdfb8084d9036b became pre-candidate at term 2"}
	{"level":"info","ts":"2023-11-01T00:18:55.332667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3bfdfb8084d9036b received MsgPreVoteResp from 3bfdfb8084d9036b at term 2"}
	{"level":"info","ts":"2023-11-01T00:18:55.332713Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3bfdfb8084d9036b became candidate at term 3"}
	{"level":"info","ts":"2023-11-01T00:18:55.332742Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3bfdfb8084d9036b received MsgVoteResp from 3bfdfb8084d9036b at term 3"}
	{"level":"info","ts":"2023-11-01T00:18:55.332799Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3bfdfb8084d9036b became leader at term 3"}
	{"level":"info","ts":"2023-11-01T00:18:55.332829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3bfdfb8084d9036b elected leader 3bfdfb8084d9036b at term 3"}
	{"level":"info","ts":"2023-11-01T00:18:55.335412Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-01T00:18:55.336103Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"3bfdfb8084d9036b","local-member-attributes":"{Name:multinode-600483 ClientURLs:[https://192.168.39.130:2379]}","request-path":"/0/members/3bfdfb8084d9036b/attributes","cluster-id":"b31a7968a7efeeee","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-01T00:18:55.336319Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-01T00:18:55.33714Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.130:2379"}
	{"level":"info","ts":"2023-11-01T00:18:55.337378Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-01T00:18:55.337567Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-01T00:18:55.337599Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  00:22:36 up 4 min,  0 users,  load average: 0.16, 0.14, 0.07
	Linux multinode-600483 5.10.57 #1 SMP Tue Oct 31 22:14:31 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [b312b97d2097f1b35bf54ff3ac20cb78f563bfbceed896b7ebd65c359a47e9f6] <==
	* I1101 00:22:02.358485       1 main.go:223] Handling node with IPs: map[192.168.39.130:{}]
	I1101 00:22:02.358543       1 main.go:227] handling current node
	I1101 00:22:02.358639       1 main.go:223] Handling node with IPs: map[192.168.39.109:{}]
	I1101 00:22:02.358646       1 main.go:250] Node multinode-600483-m02 has CIDR [10.244.1.0/24] 
	I1101 00:22:02.358909       1 main.go:223] Handling node with IPs: map[192.168.39.2:{}]
	I1101 00:22:02.359011       1 main.go:250] Node multinode-600483-m03 has CIDR [10.244.3.0/24] 
	I1101 00:22:12.373340       1 main.go:223] Handling node with IPs: map[192.168.39.130:{}]
	I1101 00:22:12.373399       1 main.go:227] handling current node
	I1101 00:22:12.373427       1 main.go:223] Handling node with IPs: map[192.168.39.109:{}]
	I1101 00:22:12.373434       1 main.go:250] Node multinode-600483-m02 has CIDR [10.244.1.0/24] 
	I1101 00:22:12.373555       1 main.go:223] Handling node with IPs: map[192.168.39.2:{}]
	I1101 00:22:12.373561       1 main.go:250] Node multinode-600483-m03 has CIDR [10.244.3.0/24] 
	I1101 00:22:22.384930       1 main.go:223] Handling node with IPs: map[192.168.39.130:{}]
	I1101 00:22:22.385117       1 main.go:227] handling current node
	I1101 00:22:22.385151       1 main.go:223] Handling node with IPs: map[192.168.39.109:{}]
	I1101 00:22:22.385252       1 main.go:250] Node multinode-600483-m02 has CIDR [10.244.1.0/24] 
	I1101 00:22:22.385581       1 main.go:223] Handling node with IPs: map[192.168.39.2:{}]
	I1101 00:22:22.385615       1 main.go:250] Node multinode-600483-m03 has CIDR [10.244.3.0/24] 
	I1101 00:22:32.391634       1 main.go:223] Handling node with IPs: map[192.168.39.130:{}]
	I1101 00:22:32.391695       1 main.go:227] handling current node
	I1101 00:22:32.391707       1 main.go:223] Handling node with IPs: map[192.168.39.109:{}]
	I1101 00:22:32.391714       1 main.go:250] Node multinode-600483-m02 has CIDR [10.244.1.0/24] 
	I1101 00:22:32.391881       1 main.go:223] Handling node with IPs: map[192.168.39.2:{}]
	I1101 00:22:32.391910       1 main.go:250] Node multinode-600483-m03 has CIDR [10.244.2.0/24] 
	I1101 00:22:32.392034       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.2 Flags: [] Table: 0} 
	
	* 
	* ==> kube-apiserver [8b2c6ec1bf480f19c7eb9dfe923b7da3412cccb0f94dd2432addde95b1649bee] <==
	* I1101 00:18:56.692128       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1101 00:18:56.703331       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1101 00:18:56.703404       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1101 00:18:56.691803       1 controller.go:134] Starting OpenAPI controller
	I1101 00:18:56.848831       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1101 00:18:56.898513       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1101 00:18:56.898674       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 00:18:56.899616       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1101 00:18:56.899889       1 shared_informer.go:318] Caches are synced for configmaps
	I1101 00:18:56.901925       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1101 00:18:56.901943       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1101 00:18:56.905681       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1101 00:18:56.906649       1 aggregator.go:166] initial CRD sync complete...
	I1101 00:18:56.906704       1 autoregister_controller.go:141] Starting autoregister controller
	I1101 00:18:56.906728       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 00:18:56.906751       1 cache.go:39] Caches are synced for autoregister controller
	I1101 00:18:56.917842       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	E1101 00:18:56.942490       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 00:18:57.697071       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 00:18:59.630238       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1101 00:18:59.773939       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1101 00:18:59.790296       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1101 00:18:59.874445       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 00:18:59.882371       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 00:19:47.070607       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [8d46c4f1bc5d81867b1d303a8860b6b0f2b8ad779bca25468733976a98e66cbd] <==
	* I1101 00:20:50.976840       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-600483-m03"
	I1101 00:20:50.977606       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="50.508µs"
	I1101 00:20:50.978728       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-6jjms" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-6jjms"
	I1101 00:20:50.978813       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-600483-m02\" does not exist"
	I1101 00:20:50.996097       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-600483-m02" podCIDRs=["10.244.1.0/24"]
	I1101 00:20:51.052208       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-600483-m02"
	I1101 00:20:51.877209       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="127.881µs"
	I1101 00:21:03.157309       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="66.195µs"
	I1101 00:21:03.729696       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="49.26µs"
	I1101 00:21:03.734660       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="78.683µs"
	I1101 00:21:30.035385       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-600483-m02"
	I1101 00:22:27.481120       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-zccxq"
	I1101 00:22:27.497266       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="30.669242ms"
	I1101 00:22:27.521333       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="23.97725ms"
	I1101 00:22:27.521944       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="55.583µs"
	I1101 00:22:27.522639       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="71.154µs"
	I1101 00:22:28.991615       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="15.372121ms"
	I1101 00:22:28.991896       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="106.933µs"
	I1101 00:22:30.500120       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-600483-m02"
	I1101 00:22:31.170185       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-600483-m03\" does not exist"
	I1101 00:22:31.170380       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-nsjs7" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-nsjs7"
	I1101 00:22:31.170461       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-600483-m02"
	I1101 00:22:31.188533       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-600483-m03" podCIDRs=["10.244.2.0/24"]
	I1101 00:22:31.212286       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-600483-m03"
	I1101 00:22:32.107406       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="151.071µs"
	
	* 
	* ==> kube-proxy [9a6c80113ca8ea3ff5651655b9969332e7ed18284092fbc4fa1a6e8cacc1e9fc] <==
	* I1101 00:18:58.581426       1 server_others.go:69] "Using iptables proxy"
	I1101 00:18:58.602380       1 node.go:141] Successfully retrieved node IP: 192.168.39.130
	I1101 00:18:58.739220       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1101 00:18:58.739289       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 00:18:58.741798       1 server_others.go:152] "Using iptables Proxier"
	I1101 00:18:58.741884       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 00:18:58.742188       1 server.go:846] "Version info" version="v1.28.3"
	I1101 00:18:58.742422       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 00:18:58.743393       1 config.go:188] "Starting service config controller"
	I1101 00:18:58.743484       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 00:18:58.743583       1 config.go:97] "Starting endpoint slice config controller"
	I1101 00:18:58.743632       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 00:18:58.744220       1 config.go:315] "Starting node config controller"
	I1101 00:18:58.745813       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 00:18:58.846068       1 shared_informer.go:318] Caches are synced for node config
	I1101 00:18:58.853110       1 shared_informer.go:318] Caches are synced for service config
	I1101 00:18:58.853140       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [273e498f903b9531b00702bfa0015fde1b25c49abbecbf2345189172a03ad6bb] <==
	* I1101 00:18:53.806234       1 serving.go:348] Generated self-signed cert in-memory
	W1101 00:18:56.814928       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 00:18:56.815085       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 00:18:56.815116       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 00:18:56.815140       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 00:18:56.850349       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1101 00:18:56.850423       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 00:18:56.856229       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1101 00:18:56.856439       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 00:18:56.856480       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 00:18:56.856513       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1101 00:18:56.958647       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-11-01 00:18:22 UTC, ends at Wed 2023-11-01 00:22:36 UTC. --
	Nov 01 00:18:58 multinode-600483 kubelet[917]: E1101 00:18:58.931341     917 projected.go:198] Error preparing data for projected volume kube-api-access-2ndz4 for pod default/busybox-5bc68d56bd-8pjvd: object "default"/"kube-root-ca.crt" not registered
	Nov 01 00:18:58 multinode-600483 kubelet[917]: E1101 00:18:58.931389     917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/85bd3938-9131-4eed-b6f7-7a4cd85f2cb9-kube-api-access-2ndz4 podName:85bd3938-9131-4eed-b6f7-7a4cd85f2cb9 nodeName:}" failed. No retries permitted until 2023-11-01 00:19:00.931374592 +0000 UTC m=+10.916018717 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2ndz4" (UniqueName: "kubernetes.io/projected/85bd3938-9131-4eed-b6f7-7a4cd85f2cb9-kube-api-access-2ndz4") pod "busybox-5bc68d56bd-8pjvd" (UID: "85bd3938-9131-4eed-b6f7-7a4cd85f2cb9") : object "default"/"kube-root-ca.crt" not registered
	Nov 01 00:18:59 multinode-600483 kubelet[917]: E1101 00:18:59.312600     917 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-8pjvd" podUID="85bd3938-9131-4eed-b6f7-7a4cd85f2cb9"
	Nov 01 00:18:59 multinode-600483 kubelet[917]: E1101 00:18:59.313247     917 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-rpvvn" podUID="d8ab0ebb-aa1f-4143-b987-6c1ae065954a"
	Nov 01 00:19:00 multinode-600483 kubelet[917]: E1101 00:19:00.850589     917 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 01 00:19:00 multinode-600483 kubelet[917]: E1101 00:19:00.850653     917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d8ab0ebb-aa1f-4143-b987-6c1ae065954a-config-volume podName:d8ab0ebb-aa1f-4143-b987-6c1ae065954a nodeName:}" failed. No retries permitted until 2023-11-01 00:19:04.850639816 +0000 UTC m=+14.835283939 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d8ab0ebb-aa1f-4143-b987-6c1ae065954a-config-volume") pod "coredns-5dd5756b68-rpvvn" (UID: "d8ab0ebb-aa1f-4143-b987-6c1ae065954a") : object "kube-system"/"coredns" not registered
	Nov 01 00:19:00 multinode-600483 kubelet[917]: E1101 00:19:00.951690     917 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Nov 01 00:19:00 multinode-600483 kubelet[917]: E1101 00:19:00.951730     917 projected.go:198] Error preparing data for projected volume kube-api-access-2ndz4 for pod default/busybox-5bc68d56bd-8pjvd: object "default"/"kube-root-ca.crt" not registered
	Nov 01 00:19:00 multinode-600483 kubelet[917]: E1101 00:19:00.951790     917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/85bd3938-9131-4eed-b6f7-7a4cd85f2cb9-kube-api-access-2ndz4 podName:85bd3938-9131-4eed-b6f7-7a4cd85f2cb9 nodeName:}" failed. No retries permitted until 2023-11-01 00:19:04.951773565 +0000 UTC m=+14.936417688 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2ndz4" (UniqueName: "kubernetes.io/projected/85bd3938-9131-4eed-b6f7-7a4cd85f2cb9-kube-api-access-2ndz4") pod "busybox-5bc68d56bd-8pjvd" (UID: "85bd3938-9131-4eed-b6f7-7a4cd85f2cb9") : object "default"/"kube-root-ca.crt" not registered
	Nov 01 00:19:01 multinode-600483 kubelet[917]: E1101 00:19:01.313084     917 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-8pjvd" podUID="85bd3938-9131-4eed-b6f7-7a4cd85f2cb9"
	Nov 01 00:19:01 multinode-600483 kubelet[917]: E1101 00:19:01.313308     917 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-rpvvn" podUID="d8ab0ebb-aa1f-4143-b987-6c1ae065954a"
	Nov 01 00:19:02 multinode-600483 kubelet[917]: I1101 00:19:02.736064     917 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 01 00:19:29 multinode-600483 kubelet[917]: I1101 00:19:29.480218     917 scope.go:117] "RemoveContainer" containerID="52467900ce0cde1478acba0f86f681e327d9a8c8a0d22bf34a953f5f86a3ad59"
	Nov 01 00:19:50 multinode-600483 kubelet[917]: E1101 00:19:50.330436     917 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 01 00:19:50 multinode-600483 kubelet[917]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 01 00:19:50 multinode-600483 kubelet[917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 01 00:19:50 multinode-600483 kubelet[917]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 01 00:20:50 multinode-600483 kubelet[917]: E1101 00:20:50.330495     917 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 01 00:20:50 multinode-600483 kubelet[917]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 01 00:20:50 multinode-600483 kubelet[917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 01 00:20:50 multinode-600483 kubelet[917]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 01 00:21:50 multinode-600483 kubelet[917]: E1101 00:21:50.330410     917 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 01 00:21:50 multinode-600483 kubelet[917]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 01 00:21:50 multinode-600483 kubelet[917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 01 00:21:50 multinode-600483 kubelet[917]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-600483 -n multinode-600483
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-600483 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (685.63s)
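Note: the post-mortem checks above can be re-run by hand with the same commands the harness used (profile/context name multinode-600483 is taken from the log; the local binary path is an assumption):

    # Sketch only: repeat the post-mortem apiserver/status and pod checks for the multinode profile.
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p multinode-600483 -n multinode-600483
    kubectl --context multinode-600483 get po -A \
      --field-selector=status.phase!=Running \
      -o=jsonpath='{.items[*].metadata.name}'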

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (142.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 stop
E1101 00:23:02.507898   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
multinode_test.go:314: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-600483 stop: exit status 82 (2m0.843022573s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-600483"  ...
	* Stopping node "multinode-600483"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:316: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-600483 stop": exit status 82
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-600483 status: exit status 3 (18.840035087s)

                                                
                                                
-- stdout --
	multinode-600483
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-600483-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 00:24:58.412337   33215 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.130:22: connect: no route to host
	E1101 00:24:58.412375   33215 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.130:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-600483 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-600483 -n multinode-600483
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-600483 -n multinode-600483: exit status 3 (3.194401323s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 00:25:01.772347   33309 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.130:22: connect: no route to host
	E1101 00:25:01.772369   33309 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.130:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-600483" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (142.88s)
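Note: a minimal sketch for reproducing the stop timeout outside the harness, using only commands that appear in this section (profile name from the log; the logs step is the collection command minikube's own error box suggests):

    # Sketch only: stop the multi-node profile, check status, and capture logs for triage.
    out/minikube-linux-amd64 -p multinode-600483 stop || echo "stop exited with status $?"   # this run hit exit status 82 here
    out/minikube-linux-amd64 -p multinode-600483 status || true                              # this run hit exit status 3 here
    out/minikube-linux-amd64 -p multinode-600483 logs --file=logs.txt                        # attach logs.txt to a GitHub issue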

                                                
                                    
x
+
TestPreload (281.75s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-270216 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1101 00:35:19.052271   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
E1101 00:35:35.091229   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-270216 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m18.006887038s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-270216 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-270216 image pull gcr.io/k8s-minikube/busybox: (2.657444117s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-270216
E1101 00:37:16.006163   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-270216: exit status 82 (2m1.527375891s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-270216"  ...
	* Stopping node "test-preload-270216"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-270216 failed: exit status 82
panic.go:523: *** TestPreload FAILED at 2023-11-01 00:37:43.125292692 +0000 UTC m=+3236.689874686
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-270216 -n test-preload-270216
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-270216 -n test-preload-270216: exit status 3 (18.651277827s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 00:38:01.772300   36224 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host
	E1101 00:38:01.772327   36224 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-270216" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-270216" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-270216
E1101 00:38:02.504928   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
--- FAIL: TestPreload (281.75s)
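Note: the failing TestPreload sequence can be replayed manually with the same arguments recorded above (sketch only; all flags and the profile name are copied from this run, and the final delete mirrors the cleanup step):

    # Sketch only: start without a preload, pull an image, then stop; the stop step is where this run timed out.
    out/minikube-linux-amd64 start -p test-preload-270216 --memory=2200 --alsologtostderr --wait=true \
      --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.24.4
    out/minikube-linux-amd64 -p test-preload-270216 image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-amd64 stop -p test-preload-270216     # exit status 82 (GUEST_STOP_TIMEOUT) in this run
    out/minikube-linux-amd64 delete -p test-preload-270216   # cleanup, as helpers_test.go does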

                                                
                                    
x
+
TestRunningBinaryUpgrade (205.48s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.2080752761.exe start -p running-upgrade-411881 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E1101 00:40:35.091629   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.2080752761.exe start -p running-upgrade-411881 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m24.025803747s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-411881 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1101 00:43:02.504497   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-411881 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (57.640088629s)

                                                
                                                
-- stdout --
	* [running-upgrade-411881] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17486
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17486-7305/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7305/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the kvm2 driver based on existing profile
	* Starting control plane node running-upgrade-411881 in cluster running-upgrade-411881
	* Updating the running kvm2 "running-upgrade-411881" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 00:42:25.385539   39332 out.go:296] Setting OutFile to fd 1 ...
	I1101 00:42:25.385783   39332 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:42:25.385793   39332 out.go:309] Setting ErrFile to fd 2...
	I1101 00:42:25.385801   39332 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:42:25.386035   39332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7305/.minikube/bin
	I1101 00:42:25.386604   39332 out.go:303] Setting JSON to false
	I1101 00:42:25.387542   39332 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5091,"bootTime":1698794255,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 00:42:25.387605   39332 start.go:138] virtualization: kvm guest
	I1101 00:42:25.390162   39332 out.go:177] * [running-upgrade-411881] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1101 00:42:25.391851   39332 out.go:177]   - MINIKUBE_LOCATION=17486
	I1101 00:42:25.391909   39332 notify.go:220] Checking for updates...
	I1101 00:42:25.393525   39332 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 00:42:25.395411   39332 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 00:42:25.397055   39332 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7305/.minikube
	I1101 00:42:25.398643   39332 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 00:42:25.400144   39332 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 00:42:25.402075   39332 config.go:182] Loaded profile config "running-upgrade-411881": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1101 00:42:25.402097   39332 start_flags.go:694] config upgrade: Driver=kvm2
	I1101 00:42:25.402124   39332 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458
	I1101 00:42:25.402234   39332 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/running-upgrade-411881/config.json ...
	I1101 00:42:25.403089   39332 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1101 00:42:25.403171   39332 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:42:25.418038   39332 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46461
	I1101 00:42:25.418495   39332 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:42:25.419095   39332 main.go:141] libmachine: Using API Version  1
	I1101 00:42:25.419118   39332 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:42:25.419508   39332 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:42:25.419701   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .DriverName
	I1101 00:42:25.422144   39332 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1101 00:42:25.423852   39332 driver.go:378] Setting default libvirt URI to qemu:///system
	I1101 00:42:25.424232   39332 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1101 00:42:25.424278   39332 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:42:25.438923   39332 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33939
	I1101 00:42:25.439402   39332 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:42:25.439873   39332 main.go:141] libmachine: Using API Version  1
	I1101 00:42:25.439889   39332 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:42:25.440249   39332 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:42:25.440427   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .DriverName
	I1101 00:42:25.474986   39332 out.go:177] * Using the kvm2 driver based on existing profile
	I1101 00:42:25.476197   39332 start.go:298] selected driver: kvm2
	I1101 00:42:25.476213   39332 start.go:902] validating driver "kvm2" against &{Name:running-upgrade-411881 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.216 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1101 00:42:25.476365   39332 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 00:42:25.477353   39332 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:42:25.477431   39332 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17486-7305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1101 00:42:25.492649   39332 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1101 00:42:25.493033   39332 cni.go:84] Creating CNI manager for ""
	I1101 00:42:25.493054   39332 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1101 00:42:25.493063   39332 start_flags.go:323] config:
	{Name:running-upgrade-411881 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.216 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1101 00:42:25.493271   39332 iso.go:125] acquiring lock: {Name:mk1f649ca0b7c1ae293cd66cb85f9eeda028b20b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:42:25.496174   39332 out.go:177] * Starting control plane node running-upgrade-411881 in cluster running-upgrade-411881
	I1101 00:42:25.497462   39332 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W1101 00:42:25.597927   39332 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1101 00:42:25.598100   39332 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/running-upgrade-411881/config.json ...
	I1101 00:42:25.598179   39332 cache.go:107] acquiring lock: {Name:mk75934dc4db90e7695096c67de431e1468f524d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:42:25.598201   39332 cache.go:107] acquiring lock: {Name:mka5f0ab2da1dc5a638693142ce88e91287c652f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:42:25.598215   39332 cache.go:107] acquiring lock: {Name:mk6ce22992620c1cc00db2dc78a127e8926b2ed9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:42:25.598314   39332 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I1101 00:42:25.598428   39332 start.go:365] acquiring machines lock for running-upgrade-411881: {Name:mk7aad88408c319111b9be8e59d9593a9e88374b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 00:42:25.598372   39332 cache.go:107] acquiring lock: {Name:mkb653ef43681731ecdec4dbfc14c80af01f8db4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:42:25.598447   39332 cache.go:107] acquiring lock: {Name:mk4772d866cb4e6ee70ab1f42ac7ca7ac27f9e6d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:42:25.598483   39332 cache.go:107] acquiring lock: {Name:mkbdd0c8ea71dacd0e5af31f78fa71949ca05500 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:42:25.598518   39332 cache.go:107] acquiring lock: {Name:mk3c44c5cab07429fd932b6f2ebab452e32c2e00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:42:25.598568   39332 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1101 00:42:25.598581   39332 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1101 00:42:25.598187   39332 cache.go:107] acquiring lock: {Name:mkdb188e30cd90c3d303f9a6e6470b7d1d7bc629 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:42:25.598346   39332 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I1101 00:42:25.598659   39332 cache.go:115] /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1101 00:42:25.598675   39332 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I1101 00:42:25.598675   39332 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 498.257µs
	I1101 00:42:25.598693   39332 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1101 00:42:25.598773   39332 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I1101 00:42:25.598346   39332 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1101 00:42:25.599594   39332 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I1101 00:42:25.599605   39332 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1101 00:42:25.599594   39332 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I1101 00:42:25.599595   39332 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I1101 00:42:25.599598   39332 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1101 00:42:25.599598   39332 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I1101 00:42:25.599597   39332 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1101 00:42:25.816418   39332 cache.go:162] opening:  /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1101 00:42:25.825679   39332 cache.go:162] opening:  /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1101 00:42:25.847020   39332 cache.go:162] opening:  /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
	I1101 00:42:25.854242   39332 cache.go:162] opening:  /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
	I1101 00:42:25.865314   39332 cache.go:162] opening:  /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
	I1101 00:42:25.868134   39332 cache.go:162] opening:  /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
	I1101 00:42:25.869539   39332 cache.go:162] opening:  /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
	I1101 00:42:25.984076   39332 cache.go:157] /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1101 00:42:25.984108   39332 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 385.892087ms
	I1101 00:42:25.984124   39332 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1101 00:42:26.420540   39332 cache.go:157] /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1101 00:42:26.420569   39332 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 822.077955ms
	I1101 00:42:26.420584   39332 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1101 00:42:26.844780   39332 cache.go:157] /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1101 00:42:26.844809   39332 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 1.246364783s
	I1101 00:42:26.844822   39332 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1101 00:42:26.958185   39332 cache.go:157] /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1101 00:42:26.958216   39332 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 1.359809297s
	I1101 00:42:26.958234   39332 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1101 00:42:26.968927   39332 cache.go:157] /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1101 00:42:26.968953   39332 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 1.370783295s
	I1101 00:42:26.968964   39332 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1101 00:42:27.709632   39332 cache.go:157] /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1101 00:42:27.709665   39332 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 2.111477733s
	I1101 00:42:27.709681   39332 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1101 00:42:27.721223   39332 cache.go:157] /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1101 00:42:27.721251   39332 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 2.122943375s
	I1101 00:42:27.721268   39332 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1101 00:42:27.721288   39332 cache.go:87] Successfully saved all images to host disk.
	I1101 00:43:19.606320   39332 start.go:369] acquired machines lock for "running-upgrade-411881" in 54.007864567s
	I1101 00:43:19.606382   39332 start.go:96] Skipping create...Using existing machine configuration
	I1101 00:43:19.606389   39332 fix.go:54] fixHost starting: minikube
	I1101 00:43:19.606804   39332 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1101 00:43:19.606846   39332 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:43:19.628130   39332 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33015
	I1101 00:43:19.628555   39332 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:43:19.629084   39332 main.go:141] libmachine: Using API Version  1
	I1101 00:43:19.629113   39332 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:43:19.629458   39332 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:43:19.629650   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .DriverName
	I1101 00:43:19.629799   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetState
	I1101 00:43:19.631503   39332 fix.go:102] recreateIfNeeded on running-upgrade-411881: state=Running err=<nil>
	W1101 00:43:19.631537   39332 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 00:43:19.633839   39332 out.go:177] * Updating the running kvm2 "running-upgrade-411881" VM ...
	I1101 00:43:19.635255   39332 machine.go:88] provisioning docker machine ...
	I1101 00:43:19.635288   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .DriverName
	I1101 00:43:19.635526   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetMachineName
	I1101 00:43:19.635698   39332 buildroot.go:166] provisioning hostname "running-upgrade-411881"
	I1101 00:43:19.635723   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetMachineName
	I1101 00:43:19.635853   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHHostname
	I1101 00:43:19.638765   39332 main.go:141] libmachine: (running-upgrade-411881) DBG | domain running-upgrade-411881 has defined MAC address 52:54:00:b4:b3:37 in network minikube-net
	I1101 00:43:19.639215   39332 main.go:141] libmachine: (running-upgrade-411881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:b3:37", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-01 01:40:38 +0000 UTC Type:0 Mac:52:54:00:b4:b3:37 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:running-upgrade-411881 Clientid:01:52:54:00:b4:b3:37}
	I1101 00:43:19.639267   39332 main.go:141] libmachine: (running-upgrade-411881) DBG | domain running-upgrade-411881 has defined IP address 192.168.50.216 and MAC address 52:54:00:b4:b3:37 in network minikube-net
	I1101 00:43:19.639388   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHPort
	I1101 00:43:19.639597   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHKeyPath
	I1101 00:43:19.639788   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHKeyPath
	I1101 00:43:19.639956   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHUsername
	I1101 00:43:19.640160   39332 main.go:141] libmachine: Using SSH client type: native
	I1101 00:43:19.640477   39332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.216 22 <nil> <nil>}
	I1101 00:43:19.640491   39332 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-411881 && echo "running-upgrade-411881" | sudo tee /etc/hostname
	I1101 00:43:19.781541   39332 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-411881
	
	I1101 00:43:19.781580   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHHostname
	I1101 00:43:19.784896   39332 main.go:141] libmachine: (running-upgrade-411881) DBG | domain running-upgrade-411881 has defined MAC address 52:54:00:b4:b3:37 in network minikube-net
	I1101 00:43:19.785354   39332 main.go:141] libmachine: (running-upgrade-411881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:b3:37", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-01 01:40:38 +0000 UTC Type:0 Mac:52:54:00:b4:b3:37 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:running-upgrade-411881 Clientid:01:52:54:00:b4:b3:37}
	I1101 00:43:19.785395   39332 main.go:141] libmachine: (running-upgrade-411881) DBG | domain running-upgrade-411881 has defined IP address 192.168.50.216 and MAC address 52:54:00:b4:b3:37 in network minikube-net
	I1101 00:43:19.785657   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHPort
	I1101 00:43:19.785843   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHKeyPath
	I1101 00:43:19.786039   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHKeyPath
	I1101 00:43:19.786203   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHUsername
	I1101 00:43:19.786359   39332 main.go:141] libmachine: Using SSH client type: native
	I1101 00:43:19.786826   39332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.216 22 <nil> <nil>}
	I1101 00:43:19.786856   39332 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-411881' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-411881/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-411881' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 00:43:19.924535   39332 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 00:43:19.924571   39332 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1101 00:43:19.924596   39332 buildroot.go:174] setting up certificates
	I1101 00:43:19.924608   39332 provision.go:83] configureAuth start
	I1101 00:43:19.924622   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetMachineName
	I1101 00:43:19.924906   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetIP
	I1101 00:43:19.928054   39332 main.go:141] libmachine: (running-upgrade-411881) DBG | domain running-upgrade-411881 has defined MAC address 52:54:00:b4:b3:37 in network minikube-net
	I1101 00:43:19.928505   39332 main.go:141] libmachine: (running-upgrade-411881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:b3:37", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-01 01:40:38 +0000 UTC Type:0 Mac:52:54:00:b4:b3:37 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:running-upgrade-411881 Clientid:01:52:54:00:b4:b3:37}
	I1101 00:43:19.928535   39332 main.go:141] libmachine: (running-upgrade-411881) DBG | domain running-upgrade-411881 has defined IP address 192.168.50.216 and MAC address 52:54:00:b4:b3:37 in network minikube-net
	I1101 00:43:19.928696   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHHostname
	I1101 00:43:19.931125   39332 main.go:141] libmachine: (running-upgrade-411881) DBG | domain running-upgrade-411881 has defined MAC address 52:54:00:b4:b3:37 in network minikube-net
	I1101 00:43:19.931607   39332 main.go:141] libmachine: (running-upgrade-411881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:b3:37", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-01 01:40:38 +0000 UTC Type:0 Mac:52:54:00:b4:b3:37 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:running-upgrade-411881 Clientid:01:52:54:00:b4:b3:37}
	I1101 00:43:19.931635   39332 main.go:141] libmachine: (running-upgrade-411881) DBG | domain running-upgrade-411881 has defined IP address 192.168.50.216 and MAC address 52:54:00:b4:b3:37 in network minikube-net
	I1101 00:43:19.931763   39332 provision.go:138] copyHostCerts
	I1101 00:43:19.931824   39332 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem, removing ...
	I1101 00:43:19.931838   39332 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 00:43:19.931899   39332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1101 00:43:19.932020   39332 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem, removing ...
	I1101 00:43:19.932026   39332 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 00:43:19.932058   39332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1101 00:43:19.932128   39332 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem, removing ...
	I1101 00:43:19.932134   39332 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 00:43:19.932165   39332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1101 00:43:19.932220   39332 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-411881 san=[192.168.50.216 192.168.50.216 localhost 127.0.0.1 minikube running-upgrade-411881]
	I1101 00:43:20.190515   39332 provision.go:172] copyRemoteCerts
	I1101 00:43:20.190575   39332 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 00:43:20.190598   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHHostname
	I1101 00:43:20.193317   39332 main.go:141] libmachine: (running-upgrade-411881) DBG | domain running-upgrade-411881 has defined MAC address 52:54:00:b4:b3:37 in network minikube-net
	I1101 00:43:20.193903   39332 main.go:141] libmachine: (running-upgrade-411881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:b3:37", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-01 01:40:38 +0000 UTC Type:0 Mac:52:54:00:b4:b3:37 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:running-upgrade-411881 Clientid:01:52:54:00:b4:b3:37}
	I1101 00:43:20.193943   39332 main.go:141] libmachine: (running-upgrade-411881) DBG | domain running-upgrade-411881 has defined IP address 192.168.50.216 and MAC address 52:54:00:b4:b3:37 in network minikube-net
	I1101 00:43:20.194233   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHPort
	I1101 00:43:20.194489   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHKeyPath
	I1101 00:43:20.194705   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHUsername
	I1101 00:43:20.194907   39332 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/running-upgrade-411881/id_rsa Username:docker}
	I1101 00:43:20.279249   39332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1101 00:43:20.299005   39332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 00:43:20.314939   39332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 00:43:20.331652   39332 provision.go:86] duration metric: configureAuth took 407.027601ms
	I1101 00:43:20.331686   39332 buildroot.go:189] setting minikube options for container-runtime
	I1101 00:43:20.331850   39332 config.go:182] Loaded profile config "running-upgrade-411881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1101 00:43:20.331957   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHHostname
	I1101 00:43:20.335151   39332 main.go:141] libmachine: (running-upgrade-411881) DBG | domain running-upgrade-411881 has defined MAC address 52:54:00:b4:b3:37 in network minikube-net
	I1101 00:43:20.335518   39332 main.go:141] libmachine: (running-upgrade-411881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:b3:37", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-01 01:40:38 +0000 UTC Type:0 Mac:52:54:00:b4:b3:37 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:running-upgrade-411881 Clientid:01:52:54:00:b4:b3:37}
	I1101 00:43:20.335562   39332 main.go:141] libmachine: (running-upgrade-411881) DBG | domain running-upgrade-411881 has defined IP address 192.168.50.216 and MAC address 52:54:00:b4:b3:37 in network minikube-net
	I1101 00:43:20.335733   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHPort
	I1101 00:43:20.335971   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHKeyPath
	I1101 00:43:20.336144   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHKeyPath
	I1101 00:43:20.336300   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHUsername
	I1101 00:43:20.336443   39332 main.go:141] libmachine: Using SSH client type: native
	I1101 00:43:20.336963   39332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.216 22 <nil> <nil>}
	I1101 00:43:20.336990   39332 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 00:43:20.922156   39332 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 00:43:20.922183   39332 machine.go:91] provisioned docker machine in 1.286908516s
	I1101 00:43:20.922195   39332 start.go:300] post-start starting for "running-upgrade-411881" (driver="kvm2")
	I1101 00:43:20.922208   39332 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 00:43:20.922236   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .DriverName
	I1101 00:43:20.922570   39332 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 00:43:20.922605   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHHostname
	I1101 00:43:20.925541   39332 main.go:141] libmachine: (running-upgrade-411881) DBG | domain running-upgrade-411881 has defined MAC address 52:54:00:b4:b3:37 in network minikube-net
	I1101 00:43:20.925992   39332 main.go:141] libmachine: (running-upgrade-411881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:b3:37", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-01 01:40:38 +0000 UTC Type:0 Mac:52:54:00:b4:b3:37 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:running-upgrade-411881 Clientid:01:52:54:00:b4:b3:37}
	I1101 00:43:20.926012   39332 main.go:141] libmachine: (running-upgrade-411881) DBG | domain running-upgrade-411881 has defined IP address 192.168.50.216 and MAC address 52:54:00:b4:b3:37 in network minikube-net
	I1101 00:43:20.926224   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHPort
	I1101 00:43:20.926493   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHKeyPath
	I1101 00:43:20.926654   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHUsername
	I1101 00:43:20.926836   39332 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/running-upgrade-411881/id_rsa Username:docker}
	I1101 00:43:21.014718   39332 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 00:43:21.019110   39332 info.go:137] Remote host: Buildroot 2019.02.7
	I1101 00:43:21.019135   39332 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1101 00:43:21.019193   39332 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1101 00:43:21.019309   39332 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> 145042.pem in /etc/ssl/certs
	I1101 00:43:21.019472   39332 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 00:43:21.026404   39332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /etc/ssl/certs/145042.pem (1708 bytes)
	I1101 00:43:21.043026   39332 start.go:303] post-start completed in 120.817073ms
	I1101 00:43:21.043052   39332 fix.go:56] fixHost completed within 1.43666108s
	I1101 00:43:21.043071   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHHostname
	I1101 00:43:21.046307   39332 main.go:141] libmachine: (running-upgrade-411881) DBG | domain running-upgrade-411881 has defined MAC address 52:54:00:b4:b3:37 in network minikube-net
	I1101 00:43:21.046612   39332 main.go:141] libmachine: (running-upgrade-411881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:b3:37", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-01 01:40:38 +0000 UTC Type:0 Mac:52:54:00:b4:b3:37 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:running-upgrade-411881 Clientid:01:52:54:00:b4:b3:37}
	I1101 00:43:21.046657   39332 main.go:141] libmachine: (running-upgrade-411881) DBG | domain running-upgrade-411881 has defined IP address 192.168.50.216 and MAC address 52:54:00:b4:b3:37 in network minikube-net
	I1101 00:43:21.046852   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHPort
	I1101 00:43:21.047081   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHKeyPath
	I1101 00:43:21.047278   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHKeyPath
	I1101 00:43:21.047456   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHUsername
	I1101 00:43:21.047645   39332 main.go:141] libmachine: Using SSH client type: native
	I1101 00:43:21.047964   39332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.216 22 <nil> <nil>}
	I1101 00:43:21.047981   39332 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1101 00:43:21.164504   39332 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698799401.161276714
	
	I1101 00:43:21.164545   39332 fix.go:206] guest clock: 1698799401.161276714
	I1101 00:43:21.164556   39332 fix.go:219] Guest: 2023-11-01 00:43:21.161276714 +0000 UTC Remote: 2023-11-01 00:43:21.043055637 +0000 UTC m=+55.707654986 (delta=118.221077ms)
	I1101 00:43:21.164612   39332 fix.go:190] guest clock delta is within tolerance: 118.221077ms
	I1101 00:43:21.164620   39332 start.go:83] releasing machines lock for "running-upgrade-411881", held for 1.558260999s
	I1101 00:43:21.164658   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .DriverName
	I1101 00:43:21.164958   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetIP
	I1101 00:43:21.168093   39332 main.go:141] libmachine: (running-upgrade-411881) DBG | domain running-upgrade-411881 has defined MAC address 52:54:00:b4:b3:37 in network minikube-net
	I1101 00:43:21.168576   39332 main.go:141] libmachine: (running-upgrade-411881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:b3:37", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-01 01:40:38 +0000 UTC Type:0 Mac:52:54:00:b4:b3:37 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:running-upgrade-411881 Clientid:01:52:54:00:b4:b3:37}
	I1101 00:43:21.168610   39332 main.go:141] libmachine: (running-upgrade-411881) DBG | domain running-upgrade-411881 has defined IP address 192.168.50.216 and MAC address 52:54:00:b4:b3:37 in network minikube-net
	I1101 00:43:21.168819   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .DriverName
	I1101 00:43:21.169442   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .DriverName
	I1101 00:43:21.169661   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .DriverName
	I1101 00:43:21.169763   39332 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 00:43:21.169802   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHHostname
	I1101 00:43:21.169881   39332 ssh_runner.go:195] Run: cat /version.json
	I1101 00:43:21.169924   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHHostname
	I1101 00:43:21.172743   39332 main.go:141] libmachine: (running-upgrade-411881) DBG | domain running-upgrade-411881 has defined MAC address 52:54:00:b4:b3:37 in network minikube-net
	I1101 00:43:21.173182   39332 main.go:141] libmachine: (running-upgrade-411881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:b3:37", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-01 01:40:38 +0000 UTC Type:0 Mac:52:54:00:b4:b3:37 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:running-upgrade-411881 Clientid:01:52:54:00:b4:b3:37}
	I1101 00:43:21.173216   39332 main.go:141] libmachine: (running-upgrade-411881) DBG | domain running-upgrade-411881 has defined IP address 192.168.50.216 and MAC address 52:54:00:b4:b3:37 in network minikube-net
	I1101 00:43:21.173240   39332 main.go:141] libmachine: (running-upgrade-411881) DBG | domain running-upgrade-411881 has defined MAC address 52:54:00:b4:b3:37 in network minikube-net
	I1101 00:43:21.173406   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHPort
	I1101 00:43:21.173605   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHKeyPath
	I1101 00:43:21.173672   39332 main.go:141] libmachine: (running-upgrade-411881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:b3:37", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-01 01:40:38 +0000 UTC Type:0 Mac:52:54:00:b4:b3:37 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:running-upgrade-411881 Clientid:01:52:54:00:b4:b3:37}
	I1101 00:43:21.173694   39332 main.go:141] libmachine: (running-upgrade-411881) DBG | domain running-upgrade-411881 has defined IP address 192.168.50.216 and MAC address 52:54:00:b4:b3:37 in network minikube-net
	I1101 00:43:21.173790   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHUsername
	I1101 00:43:21.173874   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHPort
	I1101 00:43:21.173967   39332 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/running-upgrade-411881/id_rsa Username:docker}
	I1101 00:43:21.174031   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHKeyPath
	I1101 00:43:21.174164   39332 main.go:141] libmachine: (running-upgrade-411881) Calling .GetSSHUsername
	I1101 00:43:21.174326   39332 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/running-upgrade-411881/id_rsa Username:docker}
	W1101 00:43:21.298114   39332 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1101 00:43:21.298262   39332 ssh_runner.go:195] Run: systemctl --version
	I1101 00:43:21.303467   39332 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 00:43:21.440897   39332 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 00:43:21.446767   39332 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 00:43:21.446837   39332 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 00:43:21.453442   39332 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 00:43:21.453472   39332 start.go:472] detecting cgroup driver to use...
	I1101 00:43:21.453542   39332 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 00:43:21.465928   39332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 00:43:21.476357   39332 docker.go:204] disabling cri-docker service (if available) ...
	I1101 00:43:21.476415   39332 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 00:43:21.485712   39332 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 00:43:21.493946   39332 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1101 00:43:21.502860   39332 docker.go:214] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1101 00:43:21.502935   39332 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 00:43:21.647461   39332 docker.go:220] disabling docker service ...
	I1101 00:43:21.647540   39332 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 00:43:22.677636   39332 ssh_runner.go:235] Completed: sudo systemctl stop -f docker.socket: (1.030066486s)
	I1101 00:43:22.677707   39332 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 00:43:22.687783   39332 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 00:43:22.804862   39332 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 00:43:22.925806   39332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 00:43:22.937650   39332 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 00:43:22.949713   39332 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1101 00:43:22.949787   39332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:43:22.960011   39332 out.go:177] 
	W1101 00:43:22.962297   39332 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1101 00:43:22.962325   39332 out.go:239] * 
	* 
	W1101 00:43:22.963204   39332 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 00:43:22.965307   39332 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-411881 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-11-01 00:43:22.988098652 +0000 UTC m=+3576.552680645
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-411881 -n running-upgrade-411881
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-411881 -n running-upgrade-411881: exit status 4 (254.97178ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 00:43:23.209400   42245 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-411881" does not appear in /home/jenkins/minikube-integration/17486-7305/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-411881" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
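Note: the post-mortem above treats exit status 4 as "may be ok" because the VM itself still reports Running; the status command only errors because the profile's entry is missing from the kubeconfig. A quick way to inspect the per-component state and refresh the stale context (illustrative sketch only; the template field names follow minikube's status format and are an assumption, not taken from this log):

	out/minikube-linux-amd64 status -p running-upgrade-411881 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'
	out/minikube-linux-amd64 update-context -p running-upgrade-411881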
helpers_test.go:175: Cleaning up "running-upgrade-411881" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-411881
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-411881: (1.657212227s)
--- FAIL: TestRunningBinaryUpgrade (205.48s)
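The failure above bottoms out in the pause_image rewrite: the new binary runs `sudo sed -i ... /etc/crio/crio.conf.d/02-crio.conf`, but the v1.6.x guest (Buildroot 2019.02.7) ships no such drop-in, so sed exits 1 and start aborts with RUNTIME_ENABLE. A minimal sketch for reproducing the step inside the old guest; the fallback path /etc/crio/crio.conf is an assumption about the v1.6.x ISO layout, not something this log confirms:

	# Run inside the guest, e.g. via `minikube ssh -p running-upgrade-411881`.
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	[ -f "$CONF" ] || CONF=/etc/crio/crio.conf   # assumed location on the old ISO
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' "$CONF"
	sudo systemctl restart crio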

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (302.3s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.3376606330.exe start -p stopped-upgrade-886496 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.3376606330.exe start -p stopped-upgrade-886496 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m18.52547075s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.3376606330.exe -p stopped-upgrade-886496 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.3376606330.exe -p stopped-upgrade-886496 stop: (1m34.840233024s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-886496 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1101 00:47:16.007107   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-886496 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (1m8.926804198s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-886496] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17486
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17486-7305/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7305/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-886496 in cluster stopped-upgrade-886496
	* Restarting existing kvm2 VM for "stopped-upgrade-886496" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 00:47:13.720207   46328 out.go:296] Setting OutFile to fd 1 ...
	I1101 00:47:13.720358   46328 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:47:13.720368   46328 out.go:309] Setting ErrFile to fd 2...
	I1101 00:47:13.720373   46328 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:47:13.720575   46328 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7305/.minikube/bin
	I1101 00:47:13.721129   46328 out.go:303] Setting JSON to false
	I1101 00:47:13.722188   46328 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5379,"bootTime":1698794255,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 00:47:13.722253   46328 start.go:138] virtualization: kvm guest
	I1101 00:47:13.725104   46328 out.go:177] * [stopped-upgrade-886496] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1101 00:47:13.726745   46328 out.go:177]   - MINIKUBE_LOCATION=17486
	I1101 00:47:13.726758   46328 notify.go:220] Checking for updates...
	I1101 00:47:13.728366   46328 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 00:47:13.729918   46328 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 00:47:13.731389   46328 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7305/.minikube
	I1101 00:47:13.732835   46328 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 00:47:13.734372   46328 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 00:47:13.736316   46328 config.go:182] Loaded profile config "stopped-upgrade-886496": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1101 00:47:13.736343   46328 start_flags.go:694] config upgrade: Driver=kvm2
	I1101 00:47:13.736354   46328 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458
	I1101 00:47:13.736426   46328 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/stopped-upgrade-886496/config.json ...
	I1101 00:47:13.737069   46328 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:47:13.737138   46328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:47:13.753713   46328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43163
	I1101 00:47:13.754195   46328 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:47:13.754793   46328 main.go:141] libmachine: Using API Version  1
	I1101 00:47:13.754822   46328 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:47:13.755198   46328 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:47:13.755461   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .DriverName
	I1101 00:47:13.757910   46328 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1101 00:47:13.759555   46328 driver.go:378] Setting default libvirt URI to qemu:///system
	I1101 00:47:13.759870   46328 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:47:13.759913   46328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:47:13.775711   46328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36793
	I1101 00:47:13.776191   46328 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:47:13.776720   46328 main.go:141] libmachine: Using API Version  1
	I1101 00:47:13.776741   46328 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:47:13.777075   46328 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:47:13.777300   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .DriverName
	I1101 00:47:13.814609   46328 out.go:177] * Using the kvm2 driver based on existing profile
	I1101 00:47:13.816127   46328 start.go:298] selected driver: kvm2
	I1101 00:47:13.816148   46328 start.go:902] validating driver "kvm2" against &{Name:stopped-upgrade-886496 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 Clust
erName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.129 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuth
Sock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1101 00:47:13.816257   46328 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 00:47:13.816935   46328 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:47:13.817026   46328 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17486-7305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1101 00:47:13.833947   46328 install.go:137] /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1101 00:47:13.834282   46328 cni.go:84] Creating CNI manager for ""
	I1101 00:47:13.834304   46328 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1101 00:47:13.834314   46328 start_flags.go:323] config:
	{Name:stopped-upgrade-886496 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.129 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1101 00:47:13.834503   46328 iso.go:125] acquiring lock: {Name:mk1f649ca0b7c1ae293cd66cb85f9eeda028b20b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:47:13.836542   46328 out.go:177] * Starting control plane node stopped-upgrade-886496 in cluster stopped-upgrade-886496
	I1101 00:47:13.838421   46328 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W1101 00:47:14.238840   46328 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1101 00:47:14.239045   46328 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/stopped-upgrade-886496/config.json ...
	I1101 00:47:14.239098   46328 cache.go:107] acquiring lock: {Name:mkdb188e30cd90c3d303f9a6e6470b7d1d7bc629 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:47:14.239134   46328 cache.go:107] acquiring lock: {Name:mkb653ef43681731ecdec4dbfc14c80af01f8db4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:47:14.239165   46328 cache.go:107] acquiring lock: {Name:mk75934dc4db90e7695096c67de431e1468f524d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:47:14.239206   46328 cache.go:115] /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1101 00:47:14.239218   46328 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 102.107µs
	I1101 00:47:14.239233   46328 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1101 00:47:14.239206   46328 cache.go:115] /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1101 00:47:14.239242   46328 cache.go:115] /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1101 00:47:14.239245   46328 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 159.442µs
	I1101 00:47:14.239258   46328 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1101 00:47:14.239121   46328 cache.go:107] acquiring lock: {Name:mka5f0ab2da1dc5a638693142ce88e91287c652f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:47:14.239260   46328 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 101.213µs
	I1101 00:47:14.239236   46328 cache.go:107] acquiring lock: {Name:mkbdd0c8ea71dacd0e5af31f78fa71949ca05500 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:47:14.239271   46328 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1101 00:47:14.239298   46328 cache.go:115] /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1101 00:47:14.239303   46328 cache.go:115] /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1101 00:47:14.239273   46328 cache.go:107] acquiring lock: {Name:mk6ce22992620c1cc00db2dc78a127e8926b2ed9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:47:14.239307   46328 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 196.47µs
	I1101 00:47:14.239311   46328 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 93.322µs
	I1101 00:47:14.239317   46328 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1101 00:47:14.239320   46328 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1101 00:47:14.239672   46328 cache.go:107] acquiring lock: {Name:mk3c44c5cab07429fd932b6f2ebab452e32c2e00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:47:14.239811   46328 cache.go:115] /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1101 00:47:14.239840   46328 cache.go:115] /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1101 00:47:14.239860   46328 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 537.452µs
	I1101 00:47:14.239880   46328 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1101 00:47:14.239836   46328 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 587.519µs
	I1101 00:47:14.239896   46328 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1101 00:47:14.239919   46328 cache.go:107] acquiring lock: {Name:mk4772d866cb4e6ee70ab1f42ac7ca7ac27f9e6d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:47:14.240081   46328 cache.go:115] /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1101 00:47:14.240091   46328 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 377.387µs
	I1101 00:47:14.240109   46328 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1101 00:47:14.240117   46328 cache.go:87] Successfully saved all images to host disk.
	I1101 00:47:14.241498   46328 start.go:365] acquiring machines lock for stopped-upgrade-886496: {Name:mk7aad88408c319111b9be8e59d9593a9e88374b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 00:47:32.011261   46328 start.go:369] acquired machines lock for "stopped-upgrade-886496" in 17.769720288s
	I1101 00:47:32.011332   46328 start.go:96] Skipping create...Using existing machine configuration
	I1101 00:47:32.011352   46328 fix.go:54] fixHost starting: minikube
	I1101 00:47:32.011796   46328 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:47:32.011847   46328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:47:32.029574   46328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40979
	I1101 00:47:32.030046   46328 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:47:32.030606   46328 main.go:141] libmachine: Using API Version  1
	I1101 00:47:32.030638   46328 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:47:32.031001   46328 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:47:32.031185   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .DriverName
	I1101 00:47:32.031400   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetState
	I1101 00:47:32.037122   46328 fix.go:102] recreateIfNeeded on stopped-upgrade-886496: state=Stopped err=<nil>
	I1101 00:47:32.037174   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .DriverName
	W1101 00:47:32.037406   46328 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 00:47:32.164433   46328 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-886496" ...
	I1101 00:47:32.227996   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .Start
	I1101 00:47:32.228353   46328 main.go:141] libmachine: (stopped-upgrade-886496) Ensuring networks are active...
	I1101 00:47:32.229385   46328 main.go:141] libmachine: (stopped-upgrade-886496) Ensuring network default is active
	I1101 00:47:32.229790   46328 main.go:141] libmachine: (stopped-upgrade-886496) Ensuring network minikube-net is active
	I1101 00:47:32.230246   46328 main.go:141] libmachine: (stopped-upgrade-886496) Getting domain xml...
	I1101 00:47:32.231091   46328 main.go:141] libmachine: (stopped-upgrade-886496) Creating domain...
	I1101 00:47:33.772221   46328 main.go:141] libmachine: (stopped-upgrade-886496) Waiting to get IP...
	I1101 00:47:33.773300   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:47:33.773919   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | unable to find current IP address of domain stopped-upgrade-886496 in network minikube-net
	I1101 00:47:33.774056   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | I1101 00:47:33.773898   46431 retry.go:31] will retry after 237.288063ms: waiting for machine to come up
	I1101 00:47:34.013541   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:47:34.014370   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | unable to find current IP address of domain stopped-upgrade-886496 in network minikube-net
	I1101 00:47:34.014398   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | I1101 00:47:34.014265   46431 retry.go:31] will retry after 240.799706ms: waiting for machine to come up
	I1101 00:47:34.257107   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:47:34.257875   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | unable to find current IP address of domain stopped-upgrade-886496 in network minikube-net
	I1101 00:47:34.257902   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | I1101 00:47:34.257779   46431 retry.go:31] will retry after 371.29059ms: waiting for machine to come up
	I1101 00:47:34.630338   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:47:34.630930   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | unable to find current IP address of domain stopped-upgrade-886496 in network minikube-net
	I1101 00:47:34.630964   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | I1101 00:47:34.630839   46431 retry.go:31] will retry after 546.740187ms: waiting for machine to come up
	I1101 00:47:35.179356   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:47:35.179891   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | unable to find current IP address of domain stopped-upgrade-886496 in network minikube-net
	I1101 00:47:35.179936   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | I1101 00:47:35.179853   46431 retry.go:31] will retry after 466.304948ms: waiting for machine to come up
	I1101 00:47:35.647650   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:47:35.648194   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | unable to find current IP address of domain stopped-upgrade-886496 in network minikube-net
	I1101 00:47:35.648220   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | I1101 00:47:35.648143   46431 retry.go:31] will retry after 722.324367ms: waiting for machine to come up
	I1101 00:47:36.372765   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:47:36.373269   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | unable to find current IP address of domain stopped-upgrade-886496 in network minikube-net
	I1101 00:47:36.373292   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | I1101 00:47:36.373234   46431 retry.go:31] will retry after 1.182906055s: waiting for machine to come up
	I1101 00:47:37.557765   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:47:37.558389   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | unable to find current IP address of domain stopped-upgrade-886496 in network minikube-net
	I1101 00:47:37.558412   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | I1101 00:47:37.558311   46431 retry.go:31] will retry after 1.020752294s: waiting for machine to come up
	I1101 00:47:38.580744   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:47:38.581406   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | unable to find current IP address of domain stopped-upgrade-886496 in network minikube-net
	I1101 00:47:38.581432   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | I1101 00:47:38.581309   46431 retry.go:31] will retry after 1.854682954s: waiting for machine to come up
	I1101 00:47:40.437722   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:47:40.438180   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | unable to find current IP address of domain stopped-upgrade-886496 in network minikube-net
	I1101 00:47:40.438204   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | I1101 00:47:40.438117   46431 retry.go:31] will retry after 1.734943829s: waiting for machine to come up
	I1101 00:47:42.174672   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:47:42.175167   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | unable to find current IP address of domain stopped-upgrade-886496 in network minikube-net
	I1101 00:47:42.175202   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | I1101 00:47:42.175088   46431 retry.go:31] will retry after 2.403222861s: waiting for machine to come up
	I1101 00:47:44.581226   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:47:44.581643   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | unable to find current IP address of domain stopped-upgrade-886496 in network minikube-net
	I1101 00:47:44.581667   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | I1101 00:47:44.581603   46431 retry.go:31] will retry after 2.916884347s: waiting for machine to come up
	I1101 00:47:47.500682   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:47:47.501172   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | unable to find current IP address of domain stopped-upgrade-886496 in network minikube-net
	I1101 00:47:47.501207   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | I1101 00:47:47.501111   46431 retry.go:31] will retry after 4.483555437s: waiting for machine to come up
	I1101 00:47:51.989178   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:47:51.989643   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | unable to find current IP address of domain stopped-upgrade-886496 in network minikube-net
	I1101 00:47:51.989697   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | I1101 00:47:51.989624   46431 retry.go:31] will retry after 3.790876155s: waiting for machine to come up
	I1101 00:47:55.782803   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:47:55.783401   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | unable to find current IP address of domain stopped-upgrade-886496 in network minikube-net
	I1101 00:47:55.783429   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | I1101 00:47:55.783346   46431 retry.go:31] will retry after 6.392481363s: waiting for machine to come up
	I1101 00:48:02.177370   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:48:02.177925   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | unable to find current IP address of domain stopped-upgrade-886496 in network minikube-net
	I1101 00:48:02.177950   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | I1101 00:48:02.177877   46431 retry.go:31] will retry after 6.873790586s: waiting for machine to come up
	I1101 00:48:09.053030   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:48:09.053485   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | unable to find current IP address of domain stopped-upgrade-886496 in network minikube-net
	I1101 00:48:09.053519   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | I1101 00:48:09.053420   46431 retry.go:31] will retry after 10.872301236s: waiting for machine to come up
	I1101 00:48:19.926995   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:48:19.927503   46328 main.go:141] libmachine: (stopped-upgrade-886496) Found IP for machine: 192.168.50.129
	I1101 00:48:19.927527   46328 main.go:141] libmachine: (stopped-upgrade-886496) Reserving static IP address...
	I1101 00:48:19.927573   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has current primary IP address 192.168.50.129 and MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:48:19.927917   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | unable to find host DHCP lease matching {name: "stopped-upgrade-886496", mac: "84:db:ec:37:45:63", ip: "192.168.50.129"} in network minikube-net
	I1101 00:48:20.007032   46328 main.go:141] libmachine: (stopped-upgrade-886496) Reserved static IP address: 192.168.50.129
	I1101 00:48:20.007079   46328 main.go:141] libmachine: (stopped-upgrade-886496) Waiting for SSH to be available...
	I1101 00:48:20.007092   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | Getting to WaitForSSH function...
	I1101 00:48:20.010286   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:48:20.010703   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | found host DHCP lease matching {name: "", mac: "84:db:ec:37:45:63", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-01 01:43:57 +0000 UTC Type:0 Mac:84:db:ec:37:45:63 Iaid: IPaddr:192.168.50.129 Prefix:24 Hostname:minikube Clientid:01:84:db:ec:37:45:63}
	I1101 00:48:20.010741   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined IP address 192.168.50.129 and MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:48:20.010848   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | Using SSH client type: external
	I1101 00:48:20.010886   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/stopped-upgrade-886496/id_rsa (-rw-------)
	I1101 00:48:20.010920   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7305/.minikube/machines/stopped-upgrade-886496/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 00:48:20.010936   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | About to run SSH command:
	I1101 00:48:20.010980   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | exit 0
	I1101 00:48:20.143781   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | SSH cmd err, output: <nil>: 
	I1101 00:48:20.144145   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetConfigRaw
	I1101 00:48:20.144834   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetIP
	I1101 00:48:20.147892   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:48:20.148505   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | found host DHCP lease matching {name: "", mac: "84:db:ec:37:45:63", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-01 01:48:04 +0000 UTC Type:0 Mac:84:db:ec:37:45:63 Iaid: IPaddr:192.168.50.129 Prefix:24 Hostname:stopped-upgrade-886496 Clientid:01:84:db:ec:37:45:63}
	I1101 00:48:20.148552   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined IP address 192.168.50.129 and MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:48:20.148811   46328 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/stopped-upgrade-886496/config.json ...
	I1101 00:48:20.149029   46328 machine.go:88] provisioning docker machine ...
	I1101 00:48:20.149053   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .DriverName
	I1101 00:48:20.149287   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetMachineName
	I1101 00:48:20.149473   46328 buildroot.go:166] provisioning hostname "stopped-upgrade-886496"
	I1101 00:48:20.149496   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetMachineName
	I1101 00:48:20.149672   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHHostname
	I1101 00:48:20.152384   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:48:20.152755   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | found host DHCP lease matching {name: "", mac: "84:db:ec:37:45:63", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-01 01:48:04 +0000 UTC Type:0 Mac:84:db:ec:37:45:63 Iaid: IPaddr:192.168.50.129 Prefix:24 Hostname:stopped-upgrade-886496 Clientid:01:84:db:ec:37:45:63}
	I1101 00:48:20.152810   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined IP address 192.168.50.129 and MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:48:20.152907   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHPort
	I1101 00:48:20.153093   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHKeyPath
	I1101 00:48:20.153253   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHKeyPath
	I1101 00:48:20.153381   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHUsername
	I1101 00:48:20.153610   46328 main.go:141] libmachine: Using SSH client type: native
	I1101 00:48:20.153960   46328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.129 22 <nil> <nil>}
	I1101 00:48:20.153976   46328 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-886496 && echo "stopped-upgrade-886496" | sudo tee /etc/hostname
	I1101 00:48:20.287380   46328 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-886496
	
	I1101 00:48:20.287421   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHHostname
	I1101 00:48:20.290297   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:48:20.290747   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | found host DHCP lease matching {name: "", mac: "84:db:ec:37:45:63", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-01 01:48:04 +0000 UTC Type:0 Mac:84:db:ec:37:45:63 Iaid: IPaddr:192.168.50.129 Prefix:24 Hostname:stopped-upgrade-886496 Clientid:01:84:db:ec:37:45:63}
	I1101 00:48:20.290792   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined IP address 192.168.50.129 and MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:48:20.291032   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHPort
	I1101 00:48:20.291232   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHKeyPath
	I1101 00:48:20.291407   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHKeyPath
	I1101 00:48:20.291591   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHUsername
	I1101 00:48:20.291778   46328 main.go:141] libmachine: Using SSH client type: native
	I1101 00:48:20.292163   46328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.129 22 <nil> <nil>}
	I1101 00:48:20.292190   46328 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-886496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-886496/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-886496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 00:48:20.420148   46328 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 00:48:20.420189   46328 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1101 00:48:20.420212   46328 buildroot.go:174] setting up certificates
	I1101 00:48:20.420222   46328 provision.go:83] configureAuth start
	I1101 00:48:20.420233   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetMachineName
	I1101 00:48:20.420503   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetIP
	I1101 00:48:20.423068   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:48:20.423443   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | found host DHCP lease matching {name: "", mac: "84:db:ec:37:45:63", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-01 01:48:04 +0000 UTC Type:0 Mac:84:db:ec:37:45:63 Iaid: IPaddr:192.168.50.129 Prefix:24 Hostname:stopped-upgrade-886496 Clientid:01:84:db:ec:37:45:63}
	I1101 00:48:20.423484   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined IP address 192.168.50.129 and MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:48:20.423609   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHHostname
	I1101 00:48:20.426100   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:48:20.426472   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | found host DHCP lease matching {name: "", mac: "84:db:ec:37:45:63", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-01 01:48:04 +0000 UTC Type:0 Mac:84:db:ec:37:45:63 Iaid: IPaddr:192.168.50.129 Prefix:24 Hostname:stopped-upgrade-886496 Clientid:01:84:db:ec:37:45:63}
	I1101 00:48:20.426516   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined IP address 192.168.50.129 and MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:48:20.426636   46328 provision.go:138] copyHostCerts
	I1101 00:48:20.426697   46328 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem, removing ...
	I1101 00:48:20.426707   46328 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 00:48:20.426772   46328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1101 00:48:20.426864   46328 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem, removing ...
	I1101 00:48:20.426871   46328 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 00:48:20.426897   46328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1101 00:48:20.426993   46328 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem, removing ...
	I1101 00:48:20.427004   46328 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 00:48:20.427025   46328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1101 00:48:20.427089   46328 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-886496 san=[192.168.50.129 192.168.50.129 localhost 127.0.0.1 minikube stopped-upgrade-886496]
	I1101 00:48:20.647053   46328 provision.go:172] copyRemoteCerts
	I1101 00:48:20.647119   46328 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 00:48:20.647141   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHHostname
	I1101 00:48:20.650194   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:48:20.650670   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | found host DHCP lease matching {name: "", mac: "84:db:ec:37:45:63", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-01 01:48:04 +0000 UTC Type:0 Mac:84:db:ec:37:45:63 Iaid: IPaddr:192.168.50.129 Prefix:24 Hostname:stopped-upgrade-886496 Clientid:01:84:db:ec:37:45:63}
	I1101 00:48:20.650706   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined IP address 192.168.50.129 and MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:48:20.650838   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHPort
	I1101 00:48:20.651068   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHKeyPath
	I1101 00:48:20.651250   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHUsername
	I1101 00:48:20.651389   46328 sshutil.go:53] new ssh client: &{IP:192.168.50.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/stopped-upgrade-886496/id_rsa Username:docker}
	I1101 00:48:20.738505   46328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 00:48:20.753919   46328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1101 00:48:20.768405   46328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 00:48:20.785055   46328 provision.go:86] duration metric: configureAuth took 364.816194ms
	I1101 00:48:20.785103   46328 buildroot.go:189] setting minikube options for container-runtime
	I1101 00:48:20.785275   46328 config.go:182] Loaded profile config "stopped-upgrade-886496": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1101 00:48:20.785344   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHHostname
	I1101 00:48:20.788404   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:48:20.788868   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | found host DHCP lease matching {name: "", mac: "84:db:ec:37:45:63", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-01 01:48:04 +0000 UTC Type:0 Mac:84:db:ec:37:45:63 Iaid: IPaddr:192.168.50.129 Prefix:24 Hostname:stopped-upgrade-886496 Clientid:01:84:db:ec:37:45:63}
	I1101 00:48:20.788911   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined IP address 192.168.50.129 and MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:48:20.789092   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHPort
	I1101 00:48:20.789318   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHKeyPath
	I1101 00:48:20.789475   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHKeyPath
	I1101 00:48:20.789635   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHUsername
	I1101 00:48:20.789791   46328 main.go:141] libmachine: Using SSH client type: native
	I1101 00:48:20.790159   46328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.129 22 <nil> <nil>}
	I1101 00:48:20.790179   46328 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 00:48:21.680876   46328 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 00:48:21.680907   46328 machine.go:91] provisioned docker machine in 1.531863099s
	I1101 00:48:21.680919   46328 start.go:300] post-start starting for "stopped-upgrade-886496" (driver="kvm2")
	I1101 00:48:21.680931   46328 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 00:48:21.680976   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .DriverName
	I1101 00:48:21.681338   46328 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 00:48:21.681373   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHHostname
	I1101 00:48:21.684505   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:48:21.684898   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | found host DHCP lease matching {name: "", mac: "84:db:ec:37:45:63", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-01 01:48:04 +0000 UTC Type:0 Mac:84:db:ec:37:45:63 Iaid: IPaddr:192.168.50.129 Prefix:24 Hostname:stopped-upgrade-886496 Clientid:01:84:db:ec:37:45:63}
	I1101 00:48:21.684927   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined IP address 192.168.50.129 and MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:48:21.685119   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHPort
	I1101 00:48:21.685366   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHKeyPath
	I1101 00:48:21.685648   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHUsername
	I1101 00:48:21.685809   46328 sshutil.go:53] new ssh client: &{IP:192.168.50.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/stopped-upgrade-886496/id_rsa Username:docker}
	I1101 00:48:21.774883   46328 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 00:48:21.779011   46328 info.go:137] Remote host: Buildroot 2019.02.7
	I1101 00:48:21.779040   46328 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1101 00:48:21.779126   46328 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1101 00:48:21.779217   46328 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> 145042.pem in /etc/ssl/certs
	I1101 00:48:21.779304   46328 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 00:48:21.785263   46328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /etc/ssl/certs/145042.pem (1708 bytes)
	I1101 00:48:21.799391   46328 start.go:303] post-start completed in 118.456619ms
	I1101 00:48:21.799415   46328 fix.go:56] fixHost completed within 49.788076087s
	I1101 00:48:21.799435   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHHostname
	I1101 00:48:21.802221   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:48:21.802669   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | found host DHCP lease matching {name: "", mac: "84:db:ec:37:45:63", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-01 01:48:04 +0000 UTC Type:0 Mac:84:db:ec:37:45:63 Iaid: IPaddr:192.168.50.129 Prefix:24 Hostname:stopped-upgrade-886496 Clientid:01:84:db:ec:37:45:63}
	I1101 00:48:21.802702   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined IP address 192.168.50.129 and MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:48:21.802913   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHPort
	I1101 00:48:21.803176   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHKeyPath
	I1101 00:48:21.803356   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHKeyPath
	I1101 00:48:21.803524   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHUsername
	I1101 00:48:21.803742   46328 main.go:141] libmachine: Using SSH client type: native
	I1101 00:48:21.804217   46328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.129 22 <nil> <nil>}
	I1101 00:48:21.804234   46328 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1101 00:48:21.928427   46328 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698799701.888256784
	
	I1101 00:48:21.928450   46328 fix.go:206] guest clock: 1698799701.888256784
	I1101 00:48:21.928460   46328 fix.go:219] Guest: 2023-11-01 00:48:21.888256784 +0000 UTC Remote: 2023-11-01 00:48:21.799418733 +0000 UTC m=+68.132667365 (delta=88.838051ms)
	I1101 00:48:21.928509   46328 fix.go:190] guest clock delta is within tolerance: 88.838051ms
	I1101 00:48:21.928528   46328 start.go:83] releasing machines lock for "stopped-upgrade-886496", held for 49.917209551s
	I1101 00:48:21.928656   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .DriverName
	I1101 00:48:21.928966   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetIP
	I1101 00:48:21.932315   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:48:21.932811   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | found host DHCP lease matching {name: "", mac: "84:db:ec:37:45:63", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-01 01:48:04 +0000 UTC Type:0 Mac:84:db:ec:37:45:63 Iaid: IPaddr:192.168.50.129 Prefix:24 Hostname:stopped-upgrade-886496 Clientid:01:84:db:ec:37:45:63}
	I1101 00:48:21.932841   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined IP address 192.168.50.129 and MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:48:21.933102   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .DriverName
	I1101 00:48:21.933812   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .DriverName
	I1101 00:48:21.934036   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .DriverName
	I1101 00:48:21.934137   46328 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 00:48:21.934186   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHHostname
	I1101 00:48:21.934278   46328 ssh_runner.go:195] Run: cat /version.json
	I1101 00:48:21.934305   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHHostname
	I1101 00:48:21.937021   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:48:21.937375   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | found host DHCP lease matching {name: "", mac: "84:db:ec:37:45:63", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-01 01:48:04 +0000 UTC Type:0 Mac:84:db:ec:37:45:63 Iaid: IPaddr:192.168.50.129 Prefix:24 Hostname:stopped-upgrade-886496 Clientid:01:84:db:ec:37:45:63}
	I1101 00:48:21.937413   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined IP address 192.168.50.129 and MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:48:21.937432   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:48:21.937610   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHPort
	I1101 00:48:21.937872   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHKeyPath
	I1101 00:48:21.937877   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | found host DHCP lease matching {name: "", mac: "84:db:ec:37:45:63", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-01 01:48:04 +0000 UTC Type:0 Mac:84:db:ec:37:45:63 Iaid: IPaddr:192.168.50.129 Prefix:24 Hostname:stopped-upgrade-886496 Clientid:01:84:db:ec:37:45:63}
	I1101 00:48:21.937906   46328 main.go:141] libmachine: (stopped-upgrade-886496) DBG | domain stopped-upgrade-886496 has defined IP address 192.168.50.129 and MAC address 84:db:ec:37:45:63 in network minikube-net
	I1101 00:48:21.938042   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHUsername
	I1101 00:48:21.938172   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHPort
	I1101 00:48:21.938254   46328 sshutil.go:53] new ssh client: &{IP:192.168.50.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/stopped-upgrade-886496/id_rsa Username:docker}
	I1101 00:48:21.938319   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHKeyPath
	I1101 00:48:21.938454   46328 main.go:141] libmachine: (stopped-upgrade-886496) Calling .GetSSHUsername
	I1101 00:48:21.938588   46328 sshutil.go:53] new ssh client: &{IP:192.168.50.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/stopped-upgrade-886496/id_rsa Username:docker}
	W1101 00:48:22.068068   46328 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1101 00:48:22.068147   46328 ssh_runner.go:195] Run: systemctl --version
	I1101 00:48:22.073511   46328 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 00:48:22.216939   46328 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 00:48:22.222683   46328 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 00:48:22.222751   46328 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 00:48:22.228321   46328 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 00:48:22.228344   46328 start.go:472] detecting cgroup driver to use...
	I1101 00:48:22.228404   46328 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 00:48:22.239657   46328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 00:48:22.252212   46328 docker.go:204] disabling cri-docker service (if available) ...
	I1101 00:48:22.252278   46328 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 00:48:22.263281   46328 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 00:48:22.274341   46328 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1101 00:48:22.282736   46328 docker.go:214] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1101 00:48:22.282801   46328 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 00:48:22.364370   46328 docker.go:220] disabling docker service ...
	I1101 00:48:22.364443   46328 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 00:48:22.376299   46328 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 00:48:22.384724   46328 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 00:48:22.469671   46328 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 00:48:22.549301   46328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 00:48:22.559141   46328 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 00:48:22.571078   46328 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1101 00:48:22.571144   46328 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:48:22.579739   46328 out.go:177] 
	W1101 00:48:22.581425   46328 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1101 00:48:22.581449   46328 out.go:239] * 
	* 
	W1101 00:48:22.582293   46328 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 00:48:22.584382   46328 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-886496 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (302.30s)
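The exit status 90 above comes from the RUNTIME_ENABLE step: the captured stderr shows the pause_image rewrite (sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf) failing with "No such file or directory", since the Buildroot 2019.02.7 guest provisioned from the old v1.6.2 ISO does not ship that CRI-O drop-in file. A minimal sketch of a more tolerant variant of that step follows; it is illustrative only, not minikube's actual code, and the fallback path /etc/crio/crio.conf is an assumption about where the older image keeps its CRI-O configuration rather than something confirmed by this log.

	# Illustrative sketch only: pick whichever CRI-O config file exists on the
	# guest before rewriting pause_image, instead of assuming the drop-in path.
	if [ -f /etc/crio/crio.conf.d/02-crio.conf ]; then
		CRIO_CONF=/etc/crio/crio.conf.d/02-crio.conf
	else
		CRIO_CONF=/etc/crio/crio.conf   # assumed legacy location on the old image
	fi
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' "$CRIO_CONF"

Run against the same guest over SSH, a guard like this would edit the drop-in where it exists and fall back to the single-file layout otherwise.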

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (38.47s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-582989 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-582989 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (34.133587888s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-582989] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17486
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17486-7305/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7305/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node pause-582989 in cluster pause-582989
	* Updating the running kvm2 "pause-582989" VM ...
	* Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-582989" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 00:45:49.909326   44050 out.go:296] Setting OutFile to fd 1 ...
	I1101 00:45:49.909495   44050 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:45:49.909508   44050 out.go:309] Setting ErrFile to fd 2...
	I1101 00:45:49.909515   44050 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:45:49.909695   44050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7305/.minikube/bin
	I1101 00:45:49.910253   44050 out.go:303] Setting JSON to false
	I1101 00:45:49.911273   44050 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5295,"bootTime":1698794255,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 00:45:49.911333   44050 start.go:138] virtualization: kvm guest
	I1101 00:45:49.914161   44050 out.go:177] * [pause-582989] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1101 00:45:49.916142   44050 out.go:177]   - MINIKUBE_LOCATION=17486
	I1101 00:45:49.916194   44050 notify.go:220] Checking for updates...
	I1101 00:45:49.917984   44050 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 00:45:49.919812   44050 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 00:45:49.921892   44050 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7305/.minikube
	I1101 00:45:49.923461   44050 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 00:45:49.925250   44050 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 00:45:49.928057   44050 config.go:182] Loaded profile config "pause-582989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:45:49.928572   44050 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:45:49.928634   44050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:45:49.945908   44050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37087
	I1101 00:45:49.946410   44050 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:45:49.946998   44050 main.go:141] libmachine: Using API Version  1
	I1101 00:45:49.947033   44050 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:45:49.947404   44050 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:45:49.947608   44050 main.go:141] libmachine: (pause-582989) Calling .DriverName
	I1101 00:45:49.947910   44050 driver.go:378] Setting default libvirt URI to qemu:///system
	I1101 00:45:49.948350   44050 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:45:49.948409   44050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:45:49.963546   44050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34707
	I1101 00:45:49.964078   44050 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:45:49.964647   44050 main.go:141] libmachine: Using API Version  1
	I1101 00:45:49.964669   44050 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:45:49.965098   44050 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:45:49.965306   44050 main.go:141] libmachine: (pause-582989) Calling .DriverName
	I1101 00:45:50.008360   44050 out.go:177] * Using the kvm2 driver based on existing profile
	I1101 00:45:50.009822   44050 start.go:298] selected driver: kvm2
	I1101 00:45:50.009838   44050 start.go:902] validating driver "kvm2" against &{Name:pause-582989 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.3 ClusterName:pause-582989 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.166 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-install
er:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:45:50.010008   44050 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 00:45:50.010304   44050 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:45:50.010402   44050 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17486-7305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1101 00:45:50.028420   44050 install.go:137] /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1101 00:45:50.029511   44050 cni.go:84] Creating CNI manager for ""
	I1101 00:45:50.029536   44050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 00:45:50.029553   44050 start_flags.go:323] config:
	{Name:pause-582989 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:pause-582989 Namespace:default APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.166 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false po
rtainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:45:50.029841   44050 iso.go:125] acquiring lock: {Name:mk1f649ca0b7c1ae293cd66cb85f9eeda028b20b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:45:50.033051   44050 out.go:177] * Starting control plane node pause-582989 in cluster pause-582989
	I1101 00:45:50.035168   44050 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 00:45:50.035230   44050 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1101 00:45:50.035239   44050 cache.go:56] Caching tarball of preloaded images
	I1101 00:45:50.035369   44050 preload.go:174] Found /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 00:45:50.035385   44050 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1101 00:45:50.035580   44050 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/pause-582989/config.json ...
	I1101 00:45:50.035871   44050 start.go:365] acquiring machines lock for pause-582989: {Name:mk7aad88408c319111b9be8e59d9593a9e88374b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 00:45:50.035957   44050 start.go:369] acquired machines lock for "pause-582989" in 52.258µs
	I1101 00:45:50.035988   44050 start.go:96] Skipping create...Using existing machine configuration
	I1101 00:45:50.035999   44050 fix.go:54] fixHost starting: 
	I1101 00:45:50.036380   44050 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:45:50.036429   44050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:45:50.052890   44050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35841
	I1101 00:45:50.053422   44050 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:45:50.053985   44050 main.go:141] libmachine: Using API Version  1
	I1101 00:45:50.054013   44050 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:45:50.054336   44050 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:45:50.054692   44050 main.go:141] libmachine: (pause-582989) Calling .DriverName
	I1101 00:45:50.054869   44050 main.go:141] libmachine: (pause-582989) Calling .GetState
	I1101 00:45:50.056720   44050 fix.go:102] recreateIfNeeded on pause-582989: state=Running err=<nil>
	W1101 00:45:50.056743   44050 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 00:45:50.058773   44050 out.go:177] * Updating the running kvm2 "pause-582989" VM ...
	I1101 00:45:50.060316   44050 machine.go:88] provisioning docker machine ...
	I1101 00:45:50.060354   44050 main.go:141] libmachine: (pause-582989) Calling .DriverName
	I1101 00:45:50.060643   44050 main.go:141] libmachine: (pause-582989) Calling .GetMachineName
	I1101 00:45:50.060815   44050 buildroot.go:166] provisioning hostname "pause-582989"
	I1101 00:45:50.060848   44050 main.go:141] libmachine: (pause-582989) Calling .GetMachineName
	I1101 00:45:50.061018   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHHostname
	I1101 00:45:50.064116   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.064710   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:50.064743   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.064941   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHPort
	I1101 00:45:50.065146   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:50.065305   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:50.065474   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHUsername
	I1101 00:45:50.065669   44050 main.go:141] libmachine: Using SSH client type: native
	I1101 00:45:50.066043   44050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.83.166 22 <nil> <nil>}
	I1101 00:45:50.066062   44050 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-582989 && echo "pause-582989" | sudo tee /etc/hostname
	I1101 00:45:50.201985   44050 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-582989
	
	I1101 00:45:50.202025   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHHostname
	I1101 00:45:50.205426   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.205808   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:50.205854   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.206080   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHPort
	I1101 00:45:50.206277   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:50.206421   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:50.206657   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHUsername
	I1101 00:45:50.206879   44050 main.go:141] libmachine: Using SSH client type: native
	I1101 00:45:50.207376   44050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.83.166 22 <nil> <nil>}
	I1101 00:45:50.207404   44050 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-582989' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-582989/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-582989' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 00:45:50.337919   44050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 00:45:50.337952   44050 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1101 00:45:50.337977   44050 buildroot.go:174] setting up certificates
	I1101 00:45:50.337987   44050 provision.go:83] configureAuth start
	I1101 00:45:50.337997   44050 main.go:141] libmachine: (pause-582989) Calling .GetMachineName
	I1101 00:45:50.338365   44050 main.go:141] libmachine: (pause-582989) Calling .GetIP
	I1101 00:45:50.341882   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.342333   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:50.342380   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.342706   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHHostname
	I1101 00:45:50.345420   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.345844   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:50.345877   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.346056   44050 provision.go:138] copyHostCerts
	I1101 00:45:50.346132   44050 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem, removing ...
	I1101 00:45:50.346157   44050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 00:45:50.346232   44050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1101 00:45:50.346420   44050 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem, removing ...
	I1101 00:45:50.346436   44050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 00:45:50.346469   44050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1101 00:45:50.346554   44050 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem, removing ...
	I1101 00:45:50.346567   44050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 00:45:50.346601   44050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1101 00:45:50.346670   44050 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.pause-582989 san=[192.168.83.166 192.168.83.166 localhost 127.0.0.1 minikube pause-582989]
	I1101 00:45:50.484167   44050 provision.go:172] copyRemoteCerts
	I1101 00:45:50.484224   44050 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 00:45:50.484247   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHHostname
	I1101 00:45:50.487514   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.488020   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:50.488062   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.488305   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHPort
	I1101 00:45:50.488511   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:50.488686   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHUsername
	I1101 00:45:50.488843   44050 sshutil.go:53] new ssh client: &{IP:192.168.83.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/pause-582989/id_rsa Username:docker}
	I1101 00:45:50.581873   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1101 00:45:50.613798   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 00:45:50.649371   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 00:45:50.679006   44050 provision.go:86] duration metric: configureAuth took 341.003037ms
	I1101 00:45:50.679040   44050 buildroot.go:189] setting minikube options for container-runtime
	I1101 00:45:50.679330   44050 config.go:182] Loaded profile config "pause-582989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:45:50.679427   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHHostname
	I1101 00:45:50.682431   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.682957   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:50.683002   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.683289   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHPort
	I1101 00:45:50.683552   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:50.683737   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:50.683976   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHUsername
	I1101 00:45:50.684215   44050 main.go:141] libmachine: Using SSH client type: native
	I1101 00:45:50.684701   44050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.83.166 22 <nil> <nil>}
	I1101 00:45:50.684728   44050 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 00:45:56.347433   44050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 00:45:56.347476   44050 machine.go:91] provisioned docker machine in 6.28713445s
	I1101 00:45:56.347489   44050 start.go:300] post-start starting for "pause-582989" (driver="kvm2")
	I1101 00:45:56.347502   44050 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 00:45:56.347526   44050 main.go:141] libmachine: (pause-582989) Calling .DriverName
	I1101 00:45:56.348049   44050 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 00:45:56.348077   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHHostname
	I1101 00:45:56.351396   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:56.351841   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:56.351875   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:56.352095   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHPort
	I1101 00:45:56.352316   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:56.352470   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHUsername
	I1101 00:45:56.352624   44050 sshutil.go:53] new ssh client: &{IP:192.168.83.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/pause-582989/id_rsa Username:docker}
	I1101 00:45:56.445948   44050 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 00:45:56.450833   44050 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 00:45:56.450865   44050 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1101 00:45:56.450968   44050 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1101 00:45:56.451060   44050 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> 145042.pem in /etc/ssl/certs
	I1101 00:45:56.451177   44050 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 00:45:56.460662   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /etc/ssl/certs/145042.pem (1708 bytes)
	I1101 00:45:56.484778   44050 start.go:303] post-start completed in 137.270757ms
	I1101 00:45:56.484811   44050 fix.go:56] fixHost completed within 6.448811795s
	I1101 00:45:56.484840   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHHostname
	I1101 00:45:56.488153   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:56.488557   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:56.488593   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:56.488765   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHPort
	I1101 00:45:56.488978   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:56.489134   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:56.489304   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHUsername
	I1101 00:45:56.489427   44050 main.go:141] libmachine: Using SSH client type: native
	I1101 00:45:56.489776   44050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.83.166 22 <nil> <nil>}
	I1101 00:45:56.489795   44050 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1101 00:45:56.609023   44050 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698799556.602533926
	
	I1101 00:45:56.609053   44050 fix.go:206] guest clock: 1698799556.602533926
	I1101 00:45:56.609064   44050 fix.go:219] Guest: 2023-11-01 00:45:56.602533926 +0000 UTC Remote: 2023-11-01 00:45:56.484817337 +0000 UTC m=+6.632414356 (delta=117.716589ms)
	I1101 00:45:56.609102   44050 fix.go:190] guest clock delta is within tolerance: 117.716589ms
	I1101 00:45:56.609107   44050 start.go:83] releasing machines lock for "pause-582989", held for 6.573137262s
	I1101 00:45:56.609128   44050 main.go:141] libmachine: (pause-582989) Calling .DriverName
	I1101 00:45:56.609414   44050 main.go:141] libmachine: (pause-582989) Calling .GetIP
	I1101 00:45:56.612117   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:56.612457   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:56.612497   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:56.612654   44050 main.go:141] libmachine: (pause-582989) Calling .DriverName
	I1101 00:45:56.613281   44050 main.go:141] libmachine: (pause-582989) Calling .DriverName
	I1101 00:45:56.613485   44050 main.go:141] libmachine: (pause-582989) Calling .DriverName
	I1101 00:45:56.613597   44050 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 00:45:56.613645   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHHostname
	I1101 00:45:56.613766   44050 ssh_runner.go:195] Run: cat /version.json
	I1101 00:45:56.613793   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHHostname
	I1101 00:45:56.616874   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:56.617182   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:56.617394   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:56.617425   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:56.617611   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHPort
	I1101 00:45:56.617682   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:56.617711   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:56.617811   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:56.617894   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHPort
	I1101 00:45:56.618101   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHUsername
	I1101 00:45:56.618146   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:56.618263   44050 sshutil.go:53] new ssh client: &{IP:192.168.83.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/pause-582989/id_rsa Username:docker}
	I1101 00:45:56.618325   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHUsername
	I1101 00:45:56.618450   44050 sshutil.go:53] new ssh client: &{IP:192.168.83.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/pause-582989/id_rsa Username:docker}
	I1101 00:45:56.700729   44050 ssh_runner.go:195] Run: systemctl --version
	I1101 00:45:56.743523   44050 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 00:45:56.889963   44050 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 00:45:56.895974   44050 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 00:45:56.896064   44050 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 00:45:56.904255   44050 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 00:45:56.904279   44050 start.go:472] detecting cgroup driver to use...
	I1101 00:45:56.904362   44050 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 00:45:56.921160   44050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 00:45:56.934532   44050 docker.go:204] disabling cri-docker service (if available) ...
	I1101 00:45:56.934617   44050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 00:45:56.950382   44050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 00:45:56.969459   44050 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 00:45:57.131197   44050 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 00:45:57.273543   44050 docker.go:220] disabling docker service ...
	I1101 00:45:57.273629   44050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 00:45:57.288819   44050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 00:45:57.303092   44050 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 00:45:57.436436   44050 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 00:45:57.587474   44050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 00:45:57.604128   44050 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 00:45:57.627958   44050 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 00:45:57.628031   44050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:45:57.638416   44050 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 00:45:57.638501   44050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:45:57.648465   44050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:45:57.661454   44050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:45:57.817342   44050 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 00:45:57.939500   44050 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 00:45:58.002467   44050 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 00:45:58.051453   44050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:45:58.281189   44050 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 00:45:59.808061   44050 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.526830778s)
	I1101 00:45:59.808102   44050 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 00:45:59.808170   44050 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 00:45:59.817014   44050 start.go:540] Will wait 60s for crictl version
	I1101 00:45:59.817080   44050 ssh_runner.go:195] Run: which crictl
	I1101 00:45:59.821053   44050 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 00:45:59.864699   44050 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1101 00:45:59.864803   44050 ssh_runner.go:195] Run: crio --version
	I1101 00:45:59.920105   44050 ssh_runner.go:195] Run: crio --version
	I1101 00:45:59.986161   44050 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1101 00:45:59.987762   44050 main.go:141] libmachine: (pause-582989) Calling .GetIP
	I1101 00:45:59.990782   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:59.991170   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:59.991203   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:59.991488   44050 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1101 00:45:59.996171   44050 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 00:45:59.996220   44050 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 00:46:00.043819   44050 crio.go:496] all images are preloaded for cri-o runtime.
	I1101 00:46:00.043844   44050 crio.go:415] Images already preloaded, skipping extraction
	I1101 00:46:00.043906   44050 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 00:46:00.081400   44050 crio.go:496] all images are preloaded for cri-o runtime.
	I1101 00:46:00.081424   44050 cache_images.go:84] Images are preloaded, skipping loading
	I1101 00:46:00.081490   44050 ssh_runner.go:195] Run: crio config
	I1101 00:46:00.159192   44050 cni.go:84] Creating CNI manager for ""
	I1101 00:46:00.159222   44050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 00:46:00.159243   44050 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 00:46:00.159268   44050 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.166 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-582989 NodeName:pause-582989 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.166"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.166 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 00:46:00.159429   44050 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.166
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-582989"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.166
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.166"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 00:46:00.159538   44050 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=pause-582989 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.166
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:pause-582989 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1101 00:46:00.159625   44050 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 00:46:00.170620   44050 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 00:46:00.170715   44050 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 00:46:00.180616   44050 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1101 00:46:00.199065   44050 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 00:46:00.216665   44050 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I1101 00:46:00.233251   44050 ssh_runner.go:195] Run: grep 192.168.83.166	control-plane.minikube.internal$ /etc/hosts
	I1101 00:46:00.237214   44050 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/pause-582989 for IP: 192.168.83.166
	I1101 00:46:00.237265   44050 certs.go:190] acquiring lock for shared ca certs: {Name:mk6185b3938c4b51e80f57b7c81926c81b632b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:46:00.237412   44050 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key
	I1101 00:46:00.237459   44050 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key
	I1101 00:46:00.237545   44050 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/pause-582989/client.key
	I1101 00:46:00.237610   44050 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/pause-582989/apiserver.key.bb7cef09
	I1101 00:46:00.237655   44050 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/pause-582989/proxy-client.key
	I1101 00:46:00.237753   44050 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem (1338 bytes)
	W1101 00:46:00.237793   44050 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504_empty.pem, impossibly tiny 0 bytes
	I1101 00:46:00.237806   44050 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 00:46:00.237830   44050 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem (1078 bytes)
	I1101 00:46:00.237854   44050 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem (1123 bytes)
	I1101 00:46:00.237875   44050 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem (1675 bytes)
	I1101 00:46:00.237914   44050 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem (1708 bytes)
	I1101 00:46:00.238471   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/pause-582989/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 00:46:00.261957   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/pause-582989/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 00:46:00.287195   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/pause-582989/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 00:46:00.310489   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/pause-582989/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 00:46:00.333993   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 00:46:00.358502   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 00:46:00.384471   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 00:46:00.408632   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 00:46:00.433073   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem --> /usr/share/ca-certificates/14504.pem (1338 bytes)
	I1101 00:46:00.456266   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /usr/share/ca-certificates/145042.pem (1708 bytes)
	I1101 00:46:00.480370   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 00:46:00.509229   44050 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 00:46:00.529051   44050 ssh_runner.go:195] Run: openssl version
	I1101 00:46:00.536119   44050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14504.pem && ln -fs /usr/share/ca-certificates/14504.pem /etc/ssl/certs/14504.pem"
	I1101 00:46:00.549020   44050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14504.pem
	I1101 00:46:00.554446   44050 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 00:46:00.554538   44050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14504.pem
	I1101 00:46:00.560535   44050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14504.pem /etc/ssl/certs/51391683.0"
	I1101 00:46:00.570908   44050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145042.pem && ln -fs /usr/share/ca-certificates/145042.pem /etc/ssl/certs/145042.pem"
	I1101 00:46:00.581251   44050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145042.pem
	I1101 00:46:00.585650   44050 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 00:46:00.585727   44050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145042.pem
	I1101 00:46:00.591189   44050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145042.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 00:46:00.603293   44050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 00:46:00.617172   44050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:46:00.623315   44050 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:46:00.623376   44050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:46:00.630382   44050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 00:46:00.640163   44050 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 00:46:00.644574   44050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 00:46:00.651077   44050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 00:46:00.658672   44050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 00:46:00.666213   44050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 00:46:00.673810   44050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 00:46:00.681113   44050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 00:46:00.687062   44050 kubeadm.go:404] StartCluster: {Name:pause-582989 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:pause-582989 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.166 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:46:00.687198   44050 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 00:46:00.687255   44050 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 00:46:00.729114   44050 cri.go:89] found id: "e6064425f28dda71209a4dd39d96349d4c310d45fc4827fb223f3e68b9298be6"
	I1101 00:46:00.729136   44050 cri.go:89] found id: "5ef0c9df2cfa1eddf797e9cf626df84a44695ddca78bc29dbaca6cc572a2bd1f"
	I1101 00:46:00.729140   44050 cri.go:89] found id: "83ba5cf1f9ad384bdf7e669ae56276f532bedc64d0f61ac07a53d1079a0c29e3"
	I1101 00:46:00.729145   44050 cri.go:89] found id: "1201aea9235f1fdc9c9b623c75a375f4cc07d43a0bacac462dbb5ab5d01dded9"
	I1101 00:46:00.729148   44050 cri.go:89] found id: "cab178bec38f9a3c5e477d390c044d5d08cf63740f84beb5df8499a8074bad2b"
	I1101 00:46:00.729154   44050 cri.go:89] found id: "aa2851400cfbff707064071609e4b6e34a1316788b5409e084e8d29882ab2e45"
	I1101 00:46:00.729158   44050 cri.go:89] found id: ""
	I1101 00:46:00.729199   44050 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-582989 -n pause-582989
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-582989 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-582989 logs -n 25: (1.498677075s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p cilium-090856 sudo                  | cilium-090856             | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC |                     |
	|         | systemctl cat containerd               |                           |         |                |                     |                     |
	|         | --no-pager                             |                           |         |                |                     |                     |
	| ssh     | -p cilium-090856 sudo cat              | cilium-090856             | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |                |                     |                     |
	| ssh     | -p cilium-090856 sudo cat              | cilium-090856             | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |                |                     |                     |
	| ssh     | -p cilium-090856 sudo                  | cilium-090856             | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC |                     |
	|         | containerd config dump                 |                           |         |                |                     |                     |
	| ssh     | -p cilium-090856 sudo                  | cilium-090856             | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC |                     |
	|         | systemctl status crio --all            |                           |         |                |                     |                     |
	|         | --full --no-pager                      |                           |         |                |                     |                     |
	| ssh     | -p cilium-090856 sudo                  | cilium-090856             | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |                |                     |                     |
	| ssh     | -p cilium-090856 sudo find             | cilium-090856             | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |                |                     |                     |
	| ssh     | -p cilium-090856 sudo crio             | cilium-090856             | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC |                     |
	|         | config                                 |                           |         |                |                     |                     |
	| delete  | -p cilium-090856                       | cilium-090856             | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	| delete  | -p force-systemd-env-256488            | force-systemd-env-256488  | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	| start   | -p cert-expiration-902201              | cert-expiration-902201    | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:44 UTC |
	|         | --memory=2048                          |                           |         |                |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |                |                     |                     |
	|         | --driver=kvm2                          |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| ssh     | -p NoKubernetes-345470 sudo            | NoKubernetes-345470       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC |                     |
	|         | systemctl is-active --quiet            |                           |         |                |                     |                     |
	|         | service kubelet                        |                           |         |                |                     |                     |
	| delete  | -p NoKubernetes-345470                 | NoKubernetes-345470       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	| delete  | -p running-upgrade-411881              | running-upgrade-411881    | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	| start   | -p force-systemd-flag-644407           | force-systemd-flag-644407 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:44 UTC |
	|         | --memory=2048 --force-systemd          |                           |         |                |                     |                     |
	|         | --alsologtostderr                      |                           |         |                |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| start   | -p cert-options-406160                 | cert-options-406160       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:45 UTC |
	|         | --memory=2048                          |                           |         |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |                |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |                |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |                |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |                |                     |                     |
	|         | --driver=kvm2                          |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| ssh     | force-systemd-flag-644407 ssh cat      | force-systemd-flag-644407 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:44 UTC | 01 Nov 23 00:44 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |                |                     |                     |
	| delete  | -p force-systemd-flag-644407           | force-systemd-flag-644407 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:44 UTC | 01 Nov 23 00:44 UTC |
	| start   | -p pause-582989 --memory=2048          | pause-582989              | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:44 UTC | 01 Nov 23 00:45 UTC |
	|         | --install-addons=false                 |                           |         |                |                     |                     |
	|         | --wait=all --driver=kvm2               |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| ssh     | cert-options-406160 ssh                | cert-options-406160       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:45 UTC | 01 Nov 23 00:45 UTC |
	|         | openssl x509 -text -noout -in          |                           |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                           |         |                |                     |                     |
	| ssh     | -p cert-options-406160 -- sudo         | cert-options-406160       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:45 UTC | 01 Nov 23 00:45 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                           |         |                |                     |                     |
	| delete  | -p cert-options-406160                 | cert-options-406160       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:45 UTC | 01 Nov 23 00:45 UTC |
	| start   | -p auto-090856 --memory=3072           | auto-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:45 UTC | 01 Nov 23 00:46 UTC |
	|         | --alsologtostderr --wait=true          |                           |         |                |                     |                     |
	|         | --wait-timeout=15m                     |                           |         |                |                     |                     |
	|         | --driver=kvm2                          |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| start   | -p pause-582989                        | pause-582989              | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:45 UTC | 01 Nov 23 00:46 UTC |
	|         | --alsologtostderr                      |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| ssh     | -p auto-090856 pgrep -a                | auto-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:46 UTC | 01 Nov 23 00:46 UTC |
	|         | kubelet                                |                           |         |                |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/01 00:45:49
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 00:45:49.909326   44050 out.go:296] Setting OutFile to fd 1 ...
	I1101 00:45:49.909495   44050 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:45:49.909508   44050 out.go:309] Setting ErrFile to fd 2...
	I1101 00:45:49.909515   44050 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:45:49.909695   44050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7305/.minikube/bin
	I1101 00:45:49.910253   44050 out.go:303] Setting JSON to false
	I1101 00:45:49.911273   44050 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5295,"bootTime":1698794255,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 00:45:49.911333   44050 start.go:138] virtualization: kvm guest
	I1101 00:45:49.914161   44050 out.go:177] * [pause-582989] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1101 00:45:49.916142   44050 out.go:177]   - MINIKUBE_LOCATION=17486
	I1101 00:45:49.916194   44050 notify.go:220] Checking for updates...
	I1101 00:45:49.917984   44050 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 00:45:49.919812   44050 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 00:45:49.921892   44050 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7305/.minikube
	I1101 00:45:49.923461   44050 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 00:45:49.925250   44050 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 00:45:49.928057   44050 config.go:182] Loaded profile config "pause-582989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:45:49.928572   44050 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:45:49.928634   44050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:45:49.945908   44050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37087
	I1101 00:45:49.946410   44050 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:45:49.946998   44050 main.go:141] libmachine: Using API Version  1
	I1101 00:45:49.947033   44050 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:45:49.947404   44050 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:45:49.947608   44050 main.go:141] libmachine: (pause-582989) Calling .DriverName
	I1101 00:45:49.947910   44050 driver.go:378] Setting default libvirt URI to qemu:///system
	I1101 00:45:49.948350   44050 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:45:49.948409   44050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:45:49.963546   44050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34707
	I1101 00:45:49.964078   44050 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:45:49.964647   44050 main.go:141] libmachine: Using API Version  1
	I1101 00:45:49.964669   44050 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:45:49.965098   44050 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:45:49.965306   44050 main.go:141] libmachine: (pause-582989) Calling .DriverName
	I1101 00:45:50.008360   44050 out.go:177] * Using the kvm2 driver based on existing profile
	I1101 00:45:50.009822   44050 start.go:298] selected driver: kvm2
	I1101 00:45:50.009838   44050 start.go:902] validating driver "kvm2" against &{Name:pause-582989 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.3 ClusterName:pause-582989 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.166 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-install
er:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:45:50.010008   44050 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 00:45:50.010304   44050 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:45:50.010402   44050 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17486-7305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1101 00:45:50.028420   44050 install.go:137] /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1101 00:45:50.029511   44050 cni.go:84] Creating CNI manager for ""
	I1101 00:45:50.029536   44050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 00:45:50.029553   44050 start_flags.go:323] config:
	{Name:pause-582989 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:pause-582989 Namespace:default APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.166 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false po
rtainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:45:50.029841   44050 iso.go:125] acquiring lock: {Name:mk1f649ca0b7c1ae293cd66cb85f9eeda028b20b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:45:50.033051   44050 out.go:177] * Starting control plane node pause-582989 in cluster pause-582989
	I1101 00:45:50.035168   44050 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 00:45:50.035230   44050 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1101 00:45:50.035239   44050 cache.go:56] Caching tarball of preloaded images
	I1101 00:45:50.035369   44050 preload.go:174] Found /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 00:45:50.035385   44050 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1101 00:45:50.035580   44050 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/pause-582989/config.json ...
	I1101 00:45:50.035871   44050 start.go:365] acquiring machines lock for pause-582989: {Name:mk7aad88408c319111b9be8e59d9593a9e88374b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 00:45:50.035957   44050 start.go:369] acquired machines lock for "pause-582989" in 52.258µs
	I1101 00:45:50.035988   44050 start.go:96] Skipping create...Using existing machine configuration
	I1101 00:45:50.035999   44050 fix.go:54] fixHost starting: 
	I1101 00:45:50.036380   44050 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:45:50.036429   44050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:45:50.052890   44050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35841
	I1101 00:45:50.053422   44050 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:45:50.053985   44050 main.go:141] libmachine: Using API Version  1
	I1101 00:45:50.054013   44050 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:45:50.054336   44050 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:45:50.054692   44050 main.go:141] libmachine: (pause-582989) Calling .DriverName
	I1101 00:45:50.054869   44050 main.go:141] libmachine: (pause-582989) Calling .GetState
	I1101 00:45:50.056720   44050 fix.go:102] recreateIfNeeded on pause-582989: state=Running err=<nil>
	W1101 00:45:50.056743   44050 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 00:45:50.058773   44050 out.go:177] * Updating the running kvm2 "pause-582989" VM ...
	I1101 00:45:47.724719   43726 out.go:204]   - Booting up control plane ...
	I1101 00:45:47.724885   43726 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 00:45:47.725029   43726 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 00:45:47.725682   43726 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 00:45:47.742040   43726 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 00:45:47.743389   43726 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 00:45:47.743476   43726 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1101 00:45:47.880397   43726 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 00:45:50.060316   44050 machine.go:88] provisioning docker machine ...
	I1101 00:45:50.060354   44050 main.go:141] libmachine: (pause-582989) Calling .DriverName
	I1101 00:45:50.060643   44050 main.go:141] libmachine: (pause-582989) Calling .GetMachineName
	I1101 00:45:50.060815   44050 buildroot.go:166] provisioning hostname "pause-582989"
	I1101 00:45:50.060848   44050 main.go:141] libmachine: (pause-582989) Calling .GetMachineName
	I1101 00:45:50.061018   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHHostname
	I1101 00:45:50.064116   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.064710   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:50.064743   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.064941   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHPort
	I1101 00:45:50.065146   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:50.065305   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:50.065474   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHUsername
	I1101 00:45:50.065669   44050 main.go:141] libmachine: Using SSH client type: native
	I1101 00:45:50.066043   44050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.83.166 22 <nil> <nil>}
	I1101 00:45:50.066062   44050 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-582989 && echo "pause-582989" | sudo tee /etc/hostname
	I1101 00:45:50.201985   44050 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-582989
	
	I1101 00:45:50.202025   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHHostname
	I1101 00:45:50.205426   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.205808   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:50.205854   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.206080   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHPort
	I1101 00:45:50.206277   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:50.206421   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:50.206657   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHUsername
	I1101 00:45:50.206879   44050 main.go:141] libmachine: Using SSH client type: native
	I1101 00:45:50.207376   44050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.83.166 22 <nil> <nil>}
	I1101 00:45:50.207404   44050 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-582989' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-582989/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-582989' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 00:45:50.337919   44050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 00:45:50.337952   44050 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1101 00:45:50.337977   44050 buildroot.go:174] setting up certificates
	I1101 00:45:50.337987   44050 provision.go:83] configureAuth start
	I1101 00:45:50.337997   44050 main.go:141] libmachine: (pause-582989) Calling .GetMachineName
	I1101 00:45:50.338365   44050 main.go:141] libmachine: (pause-582989) Calling .GetIP
	I1101 00:45:50.341882   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.342333   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:50.342380   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.342706   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHHostname
	I1101 00:45:50.345420   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.345844   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:50.345877   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.346056   44050 provision.go:138] copyHostCerts
	I1101 00:45:50.346132   44050 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem, removing ...
	I1101 00:45:50.346157   44050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 00:45:50.346232   44050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1101 00:45:50.346420   44050 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem, removing ...
	I1101 00:45:50.346436   44050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 00:45:50.346469   44050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1101 00:45:50.346554   44050 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem, removing ...
	I1101 00:45:50.346567   44050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 00:45:50.346601   44050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1101 00:45:50.346670   44050 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.pause-582989 san=[192.168.83.166 192.168.83.166 localhost 127.0.0.1 minikube pause-582989]
	I1101 00:45:50.484167   44050 provision.go:172] copyRemoteCerts
	I1101 00:45:50.484224   44050 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 00:45:50.484247   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHHostname
	I1101 00:45:50.487514   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.488020   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:50.488062   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.488305   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHPort
	I1101 00:45:50.488511   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:50.488686   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHUsername
	I1101 00:45:50.488843   44050 sshutil.go:53] new ssh client: &{IP:192.168.83.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/pause-582989/id_rsa Username:docker}
	I1101 00:45:50.581873   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1101 00:45:50.613798   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 00:45:50.649371   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 00:45:50.679006   44050 provision.go:86] duration metric: configureAuth took 341.003037ms
	I1101 00:45:50.679040   44050 buildroot.go:189] setting minikube options for container-runtime
	I1101 00:45:50.679330   44050 config.go:182] Loaded profile config "pause-582989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:45:50.679427   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHHostname
	I1101 00:45:50.682431   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.682957   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:50.683002   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.683289   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHPort
	I1101 00:45:50.683552   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:50.683737   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:50.683976   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHUsername
	I1101 00:45:50.684215   44050 main.go:141] libmachine: Using SSH client type: native
	I1101 00:45:50.684701   44050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.83.166 22 <nil> <nil>}
	I1101 00:45:50.684728   44050 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 00:45:56.379818   43726 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503810 seconds
	I1101 00:45:56.380016   43726 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 00:45:56.401566   43726 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 00:45:56.938430   43726 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 00:45:56.938734   43726 kubeadm.go:322] [mark-control-plane] Marking the node auto-090856 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 00:45:57.455120   43726 kubeadm.go:322] [bootstrap-token] Using token: b6mxf3.rs0dinkr1zyirwe5
	I1101 00:45:57.456672   43726 out.go:204]   - Configuring RBAC rules ...
	I1101 00:45:57.456826   43726 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 00:45:57.465111   43726 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 00:45:57.474810   43726 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 00:45:57.488186   43726 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 00:45:57.493372   43726 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 00:45:57.501819   43726 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 00:45:57.522367   43726 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 00:45:57.833204   43726 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1101 00:45:57.878960   43726 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1101 00:45:57.880180   43726 kubeadm.go:322] 
	I1101 00:45:57.880271   43726 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1101 00:45:57.880283   43726 kubeadm.go:322] 
	I1101 00:45:57.880403   43726 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1101 00:45:57.880436   43726 kubeadm.go:322] 
	I1101 00:45:57.880528   43726 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1101 00:45:57.880625   43726 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 00:45:57.880711   43726 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 00:45:57.880727   43726 kubeadm.go:322] 
	I1101 00:45:57.880835   43726 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1101 00:45:57.880851   43726 kubeadm.go:322] 
	I1101 00:45:57.880910   43726 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 00:45:57.880920   43726 kubeadm.go:322] 
	I1101 00:45:57.880984   43726 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1101 00:45:57.881071   43726 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 00:45:57.881150   43726 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 00:45:57.881157   43726 kubeadm.go:322] 
	I1101 00:45:57.881256   43726 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 00:45:57.881350   43726 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1101 00:45:57.881358   43726 kubeadm.go:322] 
	I1101 00:45:57.881465   43726 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token b6mxf3.rs0dinkr1zyirwe5 \
	I1101 00:45:57.881604   43726 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 \
	I1101 00:45:57.881629   43726 kubeadm.go:322] 	--control-plane 
	I1101 00:45:57.881635   43726 kubeadm.go:322] 
	I1101 00:45:57.881736   43726 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1101 00:45:57.881743   43726 kubeadm.go:322] 
	I1101 00:45:57.881855   43726 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token b6mxf3.rs0dinkr1zyirwe5 \
	I1101 00:45:57.882000   43726 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 
	I1101 00:45:57.882155   43726 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
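The --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. For reference, a minimal Go sketch that computes the same value from a CA certificate file; the /etc/kubernetes/pki/ca.crt path is the conventional kubeadm location and is assumed here for illustration, and this is not minikube code:

// cacerthash.go: compute kubeadm's discovery-token-ca-cert-hash from a CA certificate.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Conventional kubeadm CA location; an assumption for this sketch.
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}

Run against the same cluster's CA, the output should match the sha256:4fb403e9... value shown in the join command above.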
	I1101 00:45:57.882171   43726 cni.go:84] Creating CNI manager for ""
	I1101 00:45:57.882180   43726 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 00:45:57.883936   43726 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 00:45:56.347433   44050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 00:45:56.347476   44050 machine.go:91] provisioned docker machine in 6.28713445s
	I1101 00:45:56.347489   44050 start.go:300] post-start starting for "pause-582989" (driver="kvm2")
	I1101 00:45:56.347502   44050 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 00:45:56.347526   44050 main.go:141] libmachine: (pause-582989) Calling .DriverName
	I1101 00:45:56.348049   44050 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 00:45:56.348077   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHHostname
	I1101 00:45:56.351396   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:56.351841   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:56.351875   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:56.352095   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHPort
	I1101 00:45:56.352316   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:56.352470   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHUsername
	I1101 00:45:56.352624   44050 sshutil.go:53] new ssh client: &{IP:192.168.83.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/pause-582989/id_rsa Username:docker}
	I1101 00:45:56.445948   44050 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 00:45:56.450833   44050 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 00:45:56.450865   44050 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1101 00:45:56.450968   44050 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1101 00:45:56.451060   44050 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> 145042.pem in /etc/ssl/certs
	I1101 00:45:56.451177   44050 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 00:45:56.460662   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /etc/ssl/certs/145042.pem (1708 bytes)
	I1101 00:45:56.484778   44050 start.go:303] post-start completed in 137.270757ms
	I1101 00:45:56.484811   44050 fix.go:56] fixHost completed within 6.448811795s
	I1101 00:45:56.484840   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHHostname
	I1101 00:45:56.488153   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:56.488557   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:56.488593   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:56.488765   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHPort
	I1101 00:45:56.488978   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:56.489134   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:56.489304   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHUsername
	I1101 00:45:56.489427   44050 main.go:141] libmachine: Using SSH client type: native
	I1101 00:45:56.489776   44050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.83.166 22 <nil> <nil>}
	I1101 00:45:56.489795   44050 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1101 00:45:56.609023   44050 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698799556.602533926
	
	I1101 00:45:56.609053   44050 fix.go:206] guest clock: 1698799556.602533926
	I1101 00:45:56.609064   44050 fix.go:219] Guest: 2023-11-01 00:45:56.602533926 +0000 UTC Remote: 2023-11-01 00:45:56.484817337 +0000 UTC m=+6.632414356 (delta=117.716589ms)
	I1101 00:45:56.609102   44050 fix.go:190] guest clock delta is within tolerance: 117.716589ms
	I1101 00:45:56.609107   44050 start.go:83] releasing machines lock for "pause-582989", held for 6.573137262s
	I1101 00:45:56.609128   44050 main.go:141] libmachine: (pause-582989) Calling .DriverName
	I1101 00:45:56.609414   44050 main.go:141] libmachine: (pause-582989) Calling .GetIP
	I1101 00:45:56.612117   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:56.612457   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:56.612497   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:56.612654   44050 main.go:141] libmachine: (pause-582989) Calling .DriverName
	I1101 00:45:56.613281   44050 main.go:141] libmachine: (pause-582989) Calling .DriverName
	I1101 00:45:56.613485   44050 main.go:141] libmachine: (pause-582989) Calling .DriverName
	I1101 00:45:56.613597   44050 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 00:45:56.613645   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHHostname
	I1101 00:45:56.613766   44050 ssh_runner.go:195] Run: cat /version.json
	I1101 00:45:56.613793   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHHostname
	I1101 00:45:56.616874   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:56.617182   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:56.617394   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:56.617425   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:56.617611   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHPort
	I1101 00:45:56.617682   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:56.617711   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:56.617811   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:56.617894   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHPort
	I1101 00:45:56.618101   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHUsername
	I1101 00:45:56.618146   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:56.618263   44050 sshutil.go:53] new ssh client: &{IP:192.168.83.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/pause-582989/id_rsa Username:docker}
	I1101 00:45:56.618325   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHUsername
	I1101 00:45:56.618450   44050 sshutil.go:53] new ssh client: &{IP:192.168.83.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/pause-582989/id_rsa Username:docker}
	I1101 00:45:56.700729   44050 ssh_runner.go:195] Run: systemctl --version
	I1101 00:45:56.743523   44050 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 00:45:56.889963   44050 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 00:45:56.895974   44050 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 00:45:56.896064   44050 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 00:45:56.904255   44050 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 00:45:56.904279   44050 start.go:472] detecting cgroup driver to use...
	I1101 00:45:56.904362   44050 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 00:45:56.921160   44050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 00:45:56.934532   44050 docker.go:204] disabling cri-docker service (if available) ...
	I1101 00:45:56.934617   44050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 00:45:56.950382   44050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 00:45:56.969459   44050 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 00:45:57.131197   44050 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 00:45:57.273543   44050 docker.go:220] disabling docker service ...
	I1101 00:45:57.273629   44050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 00:45:57.288819   44050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 00:45:57.303092   44050 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 00:45:57.436436   44050 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 00:45:57.587474   44050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 00:45:57.604128   44050 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 00:45:57.627958   44050 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 00:45:57.628031   44050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:45:57.638416   44050 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 00:45:57.638501   44050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:45:57.648465   44050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:45:57.661454   44050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:45:57.817342   44050 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 00:45:57.939500   44050 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 00:45:58.002467   44050 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 00:45:58.051453   44050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:45:58.281189   44050 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 00:45:59.808061   44050 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.526830778s)
	I1101 00:45:59.808102   44050 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 00:45:59.808170   44050 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 00:45:59.817014   44050 start.go:540] Will wait 60s for crictl version
	I1101 00:45:59.817080   44050 ssh_runner.go:195] Run: which crictl
	I1101 00:45:59.821053   44050 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 00:45:59.864699   44050 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1101 00:45:59.864803   44050 ssh_runner.go:195] Run: crio --version
	I1101 00:45:59.920105   44050 ssh_runner.go:195] Run: crio --version
	I1101 00:45:59.986161   44050 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
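Before querying the runtime, the restart path above waits up to 60s for /var/run/crio/crio.sock and then for a working crictl. A hypothetical Go sketch of that kind of bounded wait is below; the socket path matches the log, the poll interval is an assumption, and minikube's real retry logic differs:

// waitsock.go: poll for a runtime socket path until it exists or a deadline passes.
package main

import (
	"fmt"
	"log"
	"os"
	"time"
)

// waitForSocket polls path until it exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket file is present
		}
		time.Sleep(500 * time.Millisecond) // poll interval is an assumption
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		log.Fatal(err)
	}
	fmt.Println("crio socket is present")
}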
	I1101 00:45:57.885324   43726 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 00:45:57.910278   43726 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1101 00:45:57.981299   43726 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 00:45:57.981383   43726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:45:57.981389   43726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9 minikube.k8s.io/name=auto-090856 minikube.k8s.io/updated_at=2023_11_01T00_45_57_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:45:58.337629   43726 ops.go:34] apiserver oom_adj: -16
	I1101 00:45:58.337744   43726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:45:58.486683   43726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:45:59.092793   43726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:45:59.593134   43726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:00.092629   43726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:45:59.987762   44050 main.go:141] libmachine: (pause-582989) Calling .GetIP
	I1101 00:45:59.990782   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:59.991170   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:59.991203   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:59.991488   44050 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1101 00:45:59.996171   44050 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 00:45:59.996220   44050 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 00:46:00.043819   44050 crio.go:496] all images are preloaded for cri-o runtime.
	I1101 00:46:00.043844   44050 crio.go:415] Images already preloaded, skipping extraction
	I1101 00:46:00.043906   44050 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 00:46:00.081400   44050 crio.go:496] all images are preloaded for cri-o runtime.
	I1101 00:46:00.081424   44050 cache_images.go:84] Images are preloaded, skipping loading
	I1101 00:46:00.081490   44050 ssh_runner.go:195] Run: crio config
	I1101 00:46:00.159192   44050 cni.go:84] Creating CNI manager for ""
	I1101 00:46:00.159222   44050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 00:46:00.159243   44050 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 00:46:00.159268   44050 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.166 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-582989 NodeName:pause-582989 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.166"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.166 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 00:46:00.159429   44050 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.166
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-582989"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.166
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.166"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 00:46:00.159538   44050 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=pause-582989 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.166
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:pause-582989 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
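The kubelet drop-in above is generated from the cluster config (kubelet binary path, CRI socket, node name, node IP). A small, hypothetical Go sketch of rendering such a drop-in with text/template follows, using the values from this log; minikube's actual template and flag set are more involved:

// rendersvc.go: render a kubelet systemd drop-in from a template (illustrative only).
package main

import (
	"log"
	"os"
	"text/template"
)

const dropIn = `[Service]
ExecStart=
ExecStart={{.Bin}} --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint={{.Socket}} --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(dropIn))
	// Values mirror the log above; the struct itself is illustrative.
	err := tmpl.Execute(os.Stdout, struct {
		Bin, Socket, Node, IP string
	}{
		Bin:    "/var/lib/minikube/binaries/v1.28.3/kubelet",
		Socket: "unix:///var/run/crio/crio.sock",
		Node:   "pause-582989",
		IP:     "192.168.83.166",
	})
	if err != nil {
		log.Fatal(err)
	}
}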
	I1101 00:46:00.159625   44050 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 00:46:00.170620   44050 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 00:46:00.170715   44050 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 00:46:00.180616   44050 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1101 00:46:00.199065   44050 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 00:46:00.216665   44050 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I1101 00:46:00.233251   44050 ssh_runner.go:195] Run: grep 192.168.83.166	control-plane.minikube.internal$ /etc/hosts
	I1101 00:46:00.237214   44050 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/pause-582989 for IP: 192.168.83.166
	I1101 00:46:00.237265   44050 certs.go:190] acquiring lock for shared ca certs: {Name:mk6185b3938c4b51e80f57b7c81926c81b632b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:46:00.237412   44050 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key
	I1101 00:46:00.237459   44050 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key
	I1101 00:46:00.237545   44050 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/pause-582989/client.key
	I1101 00:46:00.237610   44050 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/pause-582989/apiserver.key.bb7cef09
	I1101 00:46:00.237655   44050 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/pause-582989/proxy-client.key
	I1101 00:46:00.237753   44050 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem (1338 bytes)
	W1101 00:46:00.237793   44050 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504_empty.pem, impossibly tiny 0 bytes
	I1101 00:46:00.237806   44050 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 00:46:00.237830   44050 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem (1078 bytes)
	I1101 00:46:00.237854   44050 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem (1123 bytes)
	I1101 00:46:00.237875   44050 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem (1675 bytes)
	I1101 00:46:00.237914   44050 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem (1708 bytes)
	I1101 00:46:00.238471   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/pause-582989/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 00:46:00.261957   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/pause-582989/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 00:46:00.287195   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/pause-582989/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 00:46:00.310489   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/pause-582989/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 00:46:00.333993   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 00:46:00.358502   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 00:46:00.384471   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 00:46:00.408632   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 00:46:00.433073   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem --> /usr/share/ca-certificates/14504.pem (1338 bytes)
	I1101 00:46:00.456266   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /usr/share/ca-certificates/145042.pem (1708 bytes)
	I1101 00:46:00.480370   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 00:46:00.509229   44050 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 00:46:00.529051   44050 ssh_runner.go:195] Run: openssl version
	I1101 00:46:00.536119   44050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14504.pem && ln -fs /usr/share/ca-certificates/14504.pem /etc/ssl/certs/14504.pem"
	I1101 00:46:00.549020   44050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14504.pem
	I1101 00:46:00.554446   44050 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 00:46:00.554538   44050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14504.pem
	I1101 00:46:00.560535   44050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14504.pem /etc/ssl/certs/51391683.0"
	I1101 00:46:00.570908   44050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145042.pem && ln -fs /usr/share/ca-certificates/145042.pem /etc/ssl/certs/145042.pem"
	I1101 00:46:00.581251   44050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145042.pem
	I1101 00:46:00.585650   44050 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 00:46:00.585727   44050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145042.pem
	I1101 00:46:00.591189   44050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145042.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 00:46:00.603293   44050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 00:46:00.617172   44050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:46:00.623315   44050 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:46:00.623376   44050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:46:00.630382   44050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
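	(Editor's note: the repeated test/ls/hash/ln sequence above for 14504.pem, 145042.pem and minikubeCA.pem follows the standard OpenSSL CA-directory convention: each certificate copied under /usr/share/ca-certificates is also linked as /etc/ssl/certs/<subject-hash>.0 so tools using the default verify paths can find it. A minimal Go sketch of that idea, shelling out to openssl exactly as the log does; this is a hypothetical helper, not minikube's own code:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCACert links certPath into certsDir under its OpenSSL subject hash,
	// mirroring the "openssl x509 -hash -noout" + "ln -fs" steps in the log above.
	func installCACert(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		// Replace any stale link, as "ln -fs" would.
		_ = os.Remove(link)
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}

	End of editor's note; the log continues below.)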
	I1101 00:46:00.640163   44050 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 00:46:00.644574   44050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 00:46:00.651077   44050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 00:46:00.658672   44050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 00:46:00.666213   44050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 00:46:00.673810   44050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 00:46:00.681113   44050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
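	(Editor's note: the six "openssl x509 ... -checkend 86400" runs above ask whether each control-plane certificate will still be valid 86400 seconds, i.e. 24 hours, from now; a non-zero exit marks the certificate as expiring and triggers regeneration. A hedged Go equivalent using crypto/x509 instead of shelling out, for illustration only:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in the PEM file at path
	// becomes invalid within d — the same question "openssl x509 -checkend" answers.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		exp, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", exp)
	}

	End of editor's note; the log continues below.)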
	I1101 00:46:00.687062   44050 kubeadm.go:404] StartCluster: {Name:pause-582989 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
28.3 ClusterName:pause-582989 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.166 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gp
u-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:46:00.687198   44050 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 00:46:00.687255   44050 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 00:46:00.729114   44050 cri.go:89] found id: "e6064425f28dda71209a4dd39d96349d4c310d45fc4827fb223f3e68b9298be6"
	I1101 00:46:00.729136   44050 cri.go:89] found id: "5ef0c9df2cfa1eddf797e9cf626df84a44695ddca78bc29dbaca6cc572a2bd1f"
	I1101 00:46:00.729140   44050 cri.go:89] found id: "83ba5cf1f9ad384bdf7e669ae56276f532bedc64d0f61ac07a53d1079a0c29e3"
	I1101 00:46:00.729145   44050 cri.go:89] found id: "1201aea9235f1fdc9c9b623c75a375f4cc07d43a0bacac462dbb5ab5d01dded9"
	I1101 00:46:00.729148   44050 cri.go:89] found id: "cab178bec38f9a3c5e477d390c044d5d08cf63740f84beb5df8499a8074bad2b"
	I1101 00:46:00.729154   44050 cri.go:89] found id: "aa2851400cfbff707064071609e4b6e34a1316788b5409e084e8d29882ab2e45"
	I1101 00:46:00.729158   44050 cri.go:89] found id: ""
	I1101 00:46:00.729199   44050 ssh_runner.go:195] Run: sudo runc list -f json
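	(Editor's note: the container IDs enumerated above come from the "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system" call a few lines earlier, which lists every kube-system container ID, one per line, before the subsequent "runc list". A rough Go sketch of the same filter; hypothetical, since minikube actually drives this over ssh_runner rather than locally:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listKubeSystemContainers returns all container IDs whose pod lives in the
	// kube-system namespace, using the same crictl invocation seen in the log.
	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listKubeSystemContainers()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		fmt.Printf("found %d kube-system containers\n", len(ids))
	}

	End of editor's note; the CRI-O journal excerpt follows.)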
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-11-01 00:44:58 UTC, ends at Wed 2023-11-01 00:46:24 UTC. --
	Nov 01 00:46:24 pause-582989 crio[2357]: time="2023-11-01 00:46:24.682669367Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698799584682654488,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=c97c9452-946e-46b9-85d4-af75c41deec4 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 00:46:24 pause-582989 crio[2357]: time="2023-11-01 00:46:24.683614620Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7f7035f3-9c9c-4a48-9932-2c9645ee7fc7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:46:24 pause-582989 crio[2357]: time="2023-11-01 00:46:24.683685537Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7f7035f3-9c9c-4a48-9932-2c9645ee7fc7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:46:24 pause-582989 crio[2357]: time="2023-11-01 00:46:24.684105212Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cda943c34c9fa9fa8ba74e1ba2e3586a69606f53cb45fd92e2fd4954a82e6677,PodSandboxId:8fd2822663467c1a5ed22c5a835b36c705604ab0ebd5da8121e6a198edafa582,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698799565070871120,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9kk6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dfdf36a-3ee4-4786-9d57-131962bc4c88,},Annotations:map[string]string{io.kubernetes.container.hash: c1662d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aec0b2ce981a57370f48f466371293b2248977ce9ae2fb149919152c16b9c4e,PodSandboxId:1e88b38c03400de264006beb84181038bc5e4129722c9b8ca90514ebe8d7db17,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698799563190062550,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f45gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4ae73e-212e-4a24-a6d7-25ab15186ca8,},Annotations:map[string]string{io.kubernetes.container.hash: cb76dce7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae6bfaaaa10e98d835321734c5099f843e5be624c18735b22503fe925b97bca9,PodSandboxId:26d8740b647e371de08a4bfd9b19b282d749553586104e2fac88edb86ebd66cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698799562624293861,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58defa582c316c79b3d8f3f2b1f06708,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 1ea39117,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fb41d2e45f09c62cd26743a54a7e781ea89ce7b2a8b5f5b571901aead7930ea,PodSandboxId:61e80363c9c256c8a40ed4ad60191fc287f357fc8754eeaa6479038db9bd5ca7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698799562328064964,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1ae1fffd851eec445a886d4c3ef691,},Annotations:map[string]string{
io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b6d5871e2f8d441311be04de5123e1c64372b69980840e5e3cc24e341444ac5,PodSandboxId:e395e77f128231fdcd5b8c73723f3f4bf20fe80886a3f3c3d55c96fdea355cd7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698799562090540855,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeefc617d942f43c20a82588725d37c1,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 65724505,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8c4a61c641e01e33282f1d1ce144e3abbd34a8d1849b85d43f47b4c3c2db3d,PodSandboxId:f9f2060744a5c3e8fc170304e2f244ae4ba70d4f9ad82d2aa8deffed85f3e3e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698799561737526432,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800027ab3fa7a2334199a818fc36bcd,},Annotations:map[string]str
ing{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6064425f28dda71209a4dd39d96349d4c310d45fc4827fb223f3e68b9298be6,PodSandboxId:0b85b13b5614c0350341777667605ed6b87a309555353ccb8e1e825f24a0cb59,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1698799546458580184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f45gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4ae73e-212e-4a24-a6d7-25ab15186ca8,},Annotations:map[string]string{io.kubernetes.container.hash: cb76dce7,io.kubernetes.container.p
orts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ef0c9df2cfa1eddf797e9cf626df84a44695ddca78bc29dbaca6cc572a2bd1f,PodSandboxId:9904570ba6b30d88b1dc8955c5f2bd1d2aa82a05acf0398236de5990b37999e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,State:CONTAINER_EXITED,CreatedAt:1698799545480043911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9kk6,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 2dfdf36a-3ee4-4786-9d57-131962bc4c88,},Annotations:map[string]string{io.kubernetes.container.hash: c1662d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83ba5cf1f9ad384bdf7e669ae56276f532bedc64d0f61ac07a53d1079a0c29e3,PodSandboxId:e41e7c6158c0fb0712f236f5e313b1190f72fa40ca1c4c17d54be87a2e414e2e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,State:CONTAINER_EXITED,CreatedAt:1698799523651480577,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeefc617d942f43c20a82588725d37
c1,},Annotations:map[string]string{io.kubernetes.container.hash: 65724505,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1201aea9235f1fdc9c9b623c75a375f4cc07d43a0bacac462dbb5ab5d01dded9,PodSandboxId:fe9d4e12a312ed47deb9004a29184ca768e874a96e4d2f79f5cc59b33dfa38ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,State:CONTAINER_EXITED,CreatedAt:1698799523480346762,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1ae1fffd851eec445a886d4c3ef691,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cab178bec38f9a3c5e477d390c044d5d08cf63740f84beb5df8499a8074bad2b,PodSandboxId:31d2fa936ea88dd0410c22fb1b59918af1e9f1d615bdaf51db01999c5f006a81,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1698799523427456941,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800027ab3fa7a2334199a818fc36bcd,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa2851400cfbff707064071609e4b6e34a1316788b5409e084e8d29882ab2e45,PodSandboxId:1dfcae9ac6b7021a6f1121e6e3634b5eba673ead248fe685ca39029e7c408eb2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1698799523245479301,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58defa582c316c79b3d8f3f2b1f06708,},Annotations:map[string]string{io.kubernetes.container.hash: 1ea39117,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7f7035f3-9c9c-4a48-9932-2c9645ee7fc7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:46:24 pause-582989 crio[2357]: time="2023-11-01 00:46:24.731671800Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ade29e6f-381e-44d9-952e-bccfd769e467 name=/runtime.v1.RuntimeService/Version
	Nov 01 00:46:24 pause-582989 crio[2357]: time="2023-11-01 00:46:24.731727697Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ade29e6f-381e-44d9-952e-bccfd769e467 name=/runtime.v1.RuntimeService/Version
	Nov 01 00:46:24 pause-582989 crio[2357]: time="2023-11-01 00:46:24.733078584Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ed22804c-f2b5-4149-a9b9-cc494d2cb6ce name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 00:46:24 pause-582989 crio[2357]: time="2023-11-01 00:46:24.733427078Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698799584733415639,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=ed22804c-f2b5-4149-a9b9-cc494d2cb6ce name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 00:46:24 pause-582989 crio[2357]: time="2023-11-01 00:46:24.734653181Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e954e622-523e-4131-9c0c-ea61de0bdd8b name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:46:24 pause-582989 crio[2357]: time="2023-11-01 00:46:24.734732861Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e954e622-523e-4131-9c0c-ea61de0bdd8b name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:46:24 pause-582989 crio[2357]: time="2023-11-01 00:46:24.735057792Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cda943c34c9fa9fa8ba74e1ba2e3586a69606f53cb45fd92e2fd4954a82e6677,PodSandboxId:8fd2822663467c1a5ed22c5a835b36c705604ab0ebd5da8121e6a198edafa582,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698799565070871120,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9kk6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dfdf36a-3ee4-4786-9d57-131962bc4c88,},Annotations:map[string]string{io.kubernetes.container.hash: c1662d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aec0b2ce981a57370f48f466371293b2248977ce9ae2fb149919152c16b9c4e,PodSandboxId:1e88b38c03400de264006beb84181038bc5e4129722c9b8ca90514ebe8d7db17,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698799563190062550,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f45gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4ae73e-212e-4a24-a6d7-25ab15186ca8,},Annotations:map[string]string{io.kubernetes.container.hash: cb76dce7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae6bfaaaa10e98d835321734c5099f843e5be624c18735b22503fe925b97bca9,PodSandboxId:26d8740b647e371de08a4bfd9b19b282d749553586104e2fac88edb86ebd66cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698799562624293861,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58defa582c316c79b3d8f3f2b1f06708,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 1ea39117,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fb41d2e45f09c62cd26743a54a7e781ea89ce7b2a8b5f5b571901aead7930ea,PodSandboxId:61e80363c9c256c8a40ed4ad60191fc287f357fc8754eeaa6479038db9bd5ca7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698799562328064964,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1ae1fffd851eec445a886d4c3ef691,},Annotations:map[string]string{
io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b6d5871e2f8d441311be04de5123e1c64372b69980840e5e3cc24e341444ac5,PodSandboxId:e395e77f128231fdcd5b8c73723f3f4bf20fe80886a3f3c3d55c96fdea355cd7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698799562090540855,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeefc617d942f43c20a82588725d37c1,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 65724505,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8c4a61c641e01e33282f1d1ce144e3abbd34a8d1849b85d43f47b4c3c2db3d,PodSandboxId:f9f2060744a5c3e8fc170304e2f244ae4ba70d4f9ad82d2aa8deffed85f3e3e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698799561737526432,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800027ab3fa7a2334199a818fc36bcd,},Annotations:map[string]str
ing{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6064425f28dda71209a4dd39d96349d4c310d45fc4827fb223f3e68b9298be6,PodSandboxId:0b85b13b5614c0350341777667605ed6b87a309555353ccb8e1e825f24a0cb59,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1698799546458580184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f45gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4ae73e-212e-4a24-a6d7-25ab15186ca8,},Annotations:map[string]string{io.kubernetes.container.hash: cb76dce7,io.kubernetes.container.p
orts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ef0c9df2cfa1eddf797e9cf626df84a44695ddca78bc29dbaca6cc572a2bd1f,PodSandboxId:9904570ba6b30d88b1dc8955c5f2bd1d2aa82a05acf0398236de5990b37999e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,State:CONTAINER_EXITED,CreatedAt:1698799545480043911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9kk6,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 2dfdf36a-3ee4-4786-9d57-131962bc4c88,},Annotations:map[string]string{io.kubernetes.container.hash: c1662d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83ba5cf1f9ad384bdf7e669ae56276f532bedc64d0f61ac07a53d1079a0c29e3,PodSandboxId:e41e7c6158c0fb0712f236f5e313b1190f72fa40ca1c4c17d54be87a2e414e2e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,State:CONTAINER_EXITED,CreatedAt:1698799523651480577,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeefc617d942f43c20a82588725d37
c1,},Annotations:map[string]string{io.kubernetes.container.hash: 65724505,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1201aea9235f1fdc9c9b623c75a375f4cc07d43a0bacac462dbb5ab5d01dded9,PodSandboxId:fe9d4e12a312ed47deb9004a29184ca768e874a96e4d2f79f5cc59b33dfa38ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,State:CONTAINER_EXITED,CreatedAt:1698799523480346762,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1ae1fffd851eec445a886d4c3ef691,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cab178bec38f9a3c5e477d390c044d5d08cf63740f84beb5df8499a8074bad2b,PodSandboxId:31d2fa936ea88dd0410c22fb1b59918af1e9f1d615bdaf51db01999c5f006a81,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1698799523427456941,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800027ab3fa7a2334199a818fc36bcd,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa2851400cfbff707064071609e4b6e34a1316788b5409e084e8d29882ab2e45,PodSandboxId:1dfcae9ac6b7021a6f1121e6e3634b5eba673ead248fe685ca39029e7c408eb2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1698799523245479301,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58defa582c316c79b3d8f3f2b1f06708,},Annotations:map[string]string{io.kubernetes.container.hash: 1ea39117,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e954e622-523e-4131-9c0c-ea61de0bdd8b name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:46:24 pause-582989 crio[2357]: time="2023-11-01 00:46:24.778004547Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=43dbc0dc-4c46-4fea-9307-ac28677b116e name=/runtime.v1.RuntimeService/Version
	Nov 01 00:46:24 pause-582989 crio[2357]: time="2023-11-01 00:46:24.778061623Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=43dbc0dc-4c46-4fea-9307-ac28677b116e name=/runtime.v1.RuntimeService/Version
	Nov 01 00:46:24 pause-582989 crio[2357]: time="2023-11-01 00:46:24.779468057Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7250cd21-2173-46d7-a02b-ebb5b6b4cc1a name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 00:46:24 pause-582989 crio[2357]: time="2023-11-01 00:46:24.779928165Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698799584779907086,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=7250cd21-2173-46d7-a02b-ebb5b6b4cc1a name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 00:46:24 pause-582989 crio[2357]: time="2023-11-01 00:46:24.780571462Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1665c520-4ea1-4492-8309-9c735fb96793 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:46:24 pause-582989 crio[2357]: time="2023-11-01 00:46:24.780617718Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1665c520-4ea1-4492-8309-9c735fb96793 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:46:24 pause-582989 crio[2357]: time="2023-11-01 00:46:24.780925371Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cda943c34c9fa9fa8ba74e1ba2e3586a69606f53cb45fd92e2fd4954a82e6677,PodSandboxId:8fd2822663467c1a5ed22c5a835b36c705604ab0ebd5da8121e6a198edafa582,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698799565070871120,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9kk6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dfdf36a-3ee4-4786-9d57-131962bc4c88,},Annotations:map[string]string{io.kubernetes.container.hash: c1662d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aec0b2ce981a57370f48f466371293b2248977ce9ae2fb149919152c16b9c4e,PodSandboxId:1e88b38c03400de264006beb84181038bc5e4129722c9b8ca90514ebe8d7db17,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698799563190062550,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f45gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4ae73e-212e-4a24-a6d7-25ab15186ca8,},Annotations:map[string]string{io.kubernetes.container.hash: cb76dce7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae6bfaaaa10e98d835321734c5099f843e5be624c18735b22503fe925b97bca9,PodSandboxId:26d8740b647e371de08a4bfd9b19b282d749553586104e2fac88edb86ebd66cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698799562624293861,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58defa582c316c79b3d8f3f2b1f06708,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 1ea39117,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fb41d2e45f09c62cd26743a54a7e781ea89ce7b2a8b5f5b571901aead7930ea,PodSandboxId:61e80363c9c256c8a40ed4ad60191fc287f357fc8754eeaa6479038db9bd5ca7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698799562328064964,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1ae1fffd851eec445a886d4c3ef691,},Annotations:map[string]string{
io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b6d5871e2f8d441311be04de5123e1c64372b69980840e5e3cc24e341444ac5,PodSandboxId:e395e77f128231fdcd5b8c73723f3f4bf20fe80886a3f3c3d55c96fdea355cd7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698799562090540855,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeefc617d942f43c20a82588725d37c1,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 65724505,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8c4a61c641e01e33282f1d1ce144e3abbd34a8d1849b85d43f47b4c3c2db3d,PodSandboxId:f9f2060744a5c3e8fc170304e2f244ae4ba70d4f9ad82d2aa8deffed85f3e3e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698799561737526432,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800027ab3fa7a2334199a818fc36bcd,},Annotations:map[string]str
ing{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6064425f28dda71209a4dd39d96349d4c310d45fc4827fb223f3e68b9298be6,PodSandboxId:0b85b13b5614c0350341777667605ed6b87a309555353ccb8e1e825f24a0cb59,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1698799546458580184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f45gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4ae73e-212e-4a24-a6d7-25ab15186ca8,},Annotations:map[string]string{io.kubernetes.container.hash: cb76dce7,io.kubernetes.container.p
orts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ef0c9df2cfa1eddf797e9cf626df84a44695ddca78bc29dbaca6cc572a2bd1f,PodSandboxId:9904570ba6b30d88b1dc8955c5f2bd1d2aa82a05acf0398236de5990b37999e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,State:CONTAINER_EXITED,CreatedAt:1698799545480043911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9kk6,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 2dfdf36a-3ee4-4786-9d57-131962bc4c88,},Annotations:map[string]string{io.kubernetes.container.hash: c1662d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83ba5cf1f9ad384bdf7e669ae56276f532bedc64d0f61ac07a53d1079a0c29e3,PodSandboxId:e41e7c6158c0fb0712f236f5e313b1190f72fa40ca1c4c17d54be87a2e414e2e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,State:CONTAINER_EXITED,CreatedAt:1698799523651480577,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeefc617d942f43c20a82588725d37
c1,},Annotations:map[string]string{io.kubernetes.container.hash: 65724505,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1201aea9235f1fdc9c9b623c75a375f4cc07d43a0bacac462dbb5ab5d01dded9,PodSandboxId:fe9d4e12a312ed47deb9004a29184ca768e874a96e4d2f79f5cc59b33dfa38ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,State:CONTAINER_EXITED,CreatedAt:1698799523480346762,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1ae1fffd851eec445a886d4c3ef691,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cab178bec38f9a3c5e477d390c044d5d08cf63740f84beb5df8499a8074bad2b,PodSandboxId:31d2fa936ea88dd0410c22fb1b59918af1e9f1d615bdaf51db01999c5f006a81,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1698799523427456941,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800027ab3fa7a2334199a818fc36bcd,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa2851400cfbff707064071609e4b6e34a1316788b5409e084e8d29882ab2e45,PodSandboxId:1dfcae9ac6b7021a6f1121e6e3634b5eba673ead248fe685ca39029e7c408eb2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1698799523245479301,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58defa582c316c79b3d8f3f2b1f06708,},Annotations:map[string]string{io.kubernetes.container.hash: 1ea39117,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1665c520-4ea1-4492-8309-9c735fb96793 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:46:24 pause-582989 crio[2357]: time="2023-11-01 00:46:24.826525770Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=26518a23-0192-4cc9-b86a-64ab9a7705e7 name=/runtime.v1.RuntimeService/Version
	Nov 01 00:46:24 pause-582989 crio[2357]: time="2023-11-01 00:46:24.826584277Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=26518a23-0192-4cc9-b86a-64ab9a7705e7 name=/runtime.v1.RuntimeService/Version
	Nov 01 00:46:24 pause-582989 crio[2357]: time="2023-11-01 00:46:24.829305074Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=50446959-ba1c-47b0-ad6d-0eeb686f28a3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 00:46:24 pause-582989 crio[2357]: time="2023-11-01 00:46:24.829716168Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698799584829703218,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=50446959-ba1c-47b0-ad6d-0eeb686f28a3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 00:46:24 pause-582989 crio[2357]: time="2023-11-01 00:46:24.830370834Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2dfbbb87-2129-4665-aa3c-3a01141ccd36 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:46:24 pause-582989 crio[2357]: time="2023-11-01 00:46:24.830455337Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2dfbbb87-2129-4665-aa3c-3a01141ccd36 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:46:24 pause-582989 crio[2357]: time="2023-11-01 00:46:24.830685617Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cda943c34c9fa9fa8ba74e1ba2e3586a69606f53cb45fd92e2fd4954a82e6677,PodSandboxId:8fd2822663467c1a5ed22c5a835b36c705604ab0ebd5da8121e6a198edafa582,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698799565070871120,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9kk6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dfdf36a-3ee4-4786-9d57-131962bc4c88,},Annotations:map[string]string{io.kubernetes.container.hash: c1662d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aec0b2ce981a57370f48f466371293b2248977ce9ae2fb149919152c16b9c4e,PodSandboxId:1e88b38c03400de264006beb84181038bc5e4129722c9b8ca90514ebe8d7db17,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698799563190062550,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f45gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4ae73e-212e-4a24-a6d7-25ab15186ca8,},Annotations:map[string]string{io.kubernetes.container.hash: cb76dce7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae6bfaaaa10e98d835321734c5099f843e5be624c18735b22503fe925b97bca9,PodSandboxId:26d8740b647e371de08a4bfd9b19b282d749553586104e2fac88edb86ebd66cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698799562624293861,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58defa582c316c79b3d8f3f2b1f06708,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 1ea39117,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fb41d2e45f09c62cd26743a54a7e781ea89ce7b2a8b5f5b571901aead7930ea,PodSandboxId:61e80363c9c256c8a40ed4ad60191fc287f357fc8754eeaa6479038db9bd5ca7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698799562328064964,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1ae1fffd851eec445a886d4c3ef691,},Annotations:map[string]string{
io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b6d5871e2f8d441311be04de5123e1c64372b69980840e5e3cc24e341444ac5,PodSandboxId:e395e77f128231fdcd5b8c73723f3f4bf20fe80886a3f3c3d55c96fdea355cd7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698799562090540855,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeefc617d942f43c20a82588725d37c1,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 65724505,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8c4a61c641e01e33282f1d1ce144e3abbd34a8d1849b85d43f47b4c3c2db3d,PodSandboxId:f9f2060744a5c3e8fc170304e2f244ae4ba70d4f9ad82d2aa8deffed85f3e3e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698799561737526432,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800027ab3fa7a2334199a818fc36bcd,},Annotations:map[string]str
ing{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6064425f28dda71209a4dd39d96349d4c310d45fc4827fb223f3e68b9298be6,PodSandboxId:0b85b13b5614c0350341777667605ed6b87a309555353ccb8e1e825f24a0cb59,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1698799546458580184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f45gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4ae73e-212e-4a24-a6d7-25ab15186ca8,},Annotations:map[string]string{io.kubernetes.container.hash: cb76dce7,io.kubernetes.container.p
orts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ef0c9df2cfa1eddf797e9cf626df84a44695ddca78bc29dbaca6cc572a2bd1f,PodSandboxId:9904570ba6b30d88b1dc8955c5f2bd1d2aa82a05acf0398236de5990b37999e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,State:CONTAINER_EXITED,CreatedAt:1698799545480043911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9kk6,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 2dfdf36a-3ee4-4786-9d57-131962bc4c88,},Annotations:map[string]string{io.kubernetes.container.hash: c1662d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83ba5cf1f9ad384bdf7e669ae56276f532bedc64d0f61ac07a53d1079a0c29e3,PodSandboxId:e41e7c6158c0fb0712f236f5e313b1190f72fa40ca1c4c17d54be87a2e414e2e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,State:CONTAINER_EXITED,CreatedAt:1698799523651480577,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeefc617d942f43c20a82588725d37
c1,},Annotations:map[string]string{io.kubernetes.container.hash: 65724505,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1201aea9235f1fdc9c9b623c75a375f4cc07d43a0bacac462dbb5ab5d01dded9,PodSandboxId:fe9d4e12a312ed47deb9004a29184ca768e874a96e4d2f79f5cc59b33dfa38ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,State:CONTAINER_EXITED,CreatedAt:1698799523480346762,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1ae1fffd851eec445a886d4c3ef691,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cab178bec38f9a3c5e477d390c044d5d08cf63740f84beb5df8499a8074bad2b,PodSandboxId:31d2fa936ea88dd0410c22fb1b59918af1e9f1d615bdaf51db01999c5f006a81,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1698799523427456941,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800027ab3fa7a2334199a818fc36bcd,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa2851400cfbff707064071609e4b6e34a1316788b5409e084e8d29882ab2e45,PodSandboxId:1dfcae9ac6b7021a6f1121e6e3634b5eba673ead248fe685ca39029e7c408eb2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1698799523245479301,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58defa582c316c79b3d8f3f2b1f06708,},Annotations:map[string]string{io.kubernetes.container.hash: 1ea39117,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2dfbbb87-2129-4665-aa3c-3a01141ccd36 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	cda943c34c9fa       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   19 seconds ago       Running             kube-proxy                1                   8fd2822663467       kube-proxy-l9kk6
	4aec0b2ce981a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   21 seconds ago       Running             coredns                   1                   1e88b38c03400       coredns-5dd5756b68-f45gz
	ae6bfaaaa10e9       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   22 seconds ago       Running             etcd                      1                   26d8740b647e3       etcd-pause-582989
	6fb41d2e45f09       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   22 seconds ago       Running             kube-scheduler            1                   61e80363c9c25       kube-scheduler-pause-582989
	3b6d5871e2f8d       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   22 seconds ago       Running             kube-apiserver            1                   e395e77f12823       kube-apiserver-pause-582989
	5e8c4a61c641e       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   23 seconds ago       Running             kube-controller-manager   1                   f9f2060744a5c       kube-controller-manager-pause-582989
	e6064425f28dd       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   38 seconds ago       Exited              coredns                   0                   0b85b13b5614c       coredns-5dd5756b68-f45gz
	5ef0c9df2cfa1       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   39 seconds ago       Exited              kube-proxy                0                   9904570ba6b30       kube-proxy-l9kk6
	83ba5cf1f9ad3       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   About a minute ago   Exited              kube-apiserver            0                   e41e7c6158c0f       kube-apiserver-pause-582989
	1201aea9235f1       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   About a minute ago   Exited              kube-scheduler            0                   fe9d4e12a312e       kube-scheduler-pause-582989
	cab178bec38f9       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   About a minute ago   Exited              kube-controller-manager   0                   31d2fa936ea88       kube-controller-manager-pause-582989
	aa2851400cfbf       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   About a minute ago   Exited              etcd                      0                   1dfcae9ac6b70       etcd-pause-582989
	
	* 
	* ==> coredns [4aec0b2ce981a57370f48f466371293b2248977ce9ae2fb149919152c16b9c4e] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 347fb4f25cc546215231b2e9ef34a7838489408c50ad1d77e38b06de967dd388dc540a0db2692259640c7998323f3763426b7a7e73fad2aa89cebddf27cf7c94
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:56447 - 13752 "HINFO IN 826386391169473441.4398837603073810538. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.00915945s
	
	* 
	* ==> coredns [e6064425f28dda71209a4dd39d96349d4c310d45fc4827fb223f3e68b9298be6] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 347fb4f25cc546215231b2e9ef34a7838489408c50ad1d77e38b06de967dd388dc540a0db2692259640c7998323f3763426b7a7e73fad2aa89cebddf27cf7c94
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60087 - 48228 "HINFO IN 6161423461098319785.3218909952744732747. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017768473s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-582989
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-582989
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9
	                    minikube.k8s.io/name=pause-582989
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_01T00_45_31_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Nov 2023 00:45:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-582989
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Nov 2023 00:46:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Nov 2023 00:46:12 +0000   Wed, 01 Nov 2023 00:45:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Nov 2023 00:46:12 +0000   Wed, 01 Nov 2023 00:45:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Nov 2023 00:46:12 +0000   Wed, 01 Nov 2023 00:45:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Nov 2023 00:46:12 +0000   Wed, 01 Nov 2023 00:45:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.166
	  Hostname:    pause-582989
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 6bc7a7e21aa14f7798ac5a787d112281
	  System UUID:                6bc7a7e2-1aa1-4f77-98ac-5a787d112281
	  Boot ID:                    a2e2fd9a-cba3-46f5-b2db-5d98a9cb887a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-f45gz                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     41s
	  kube-system                 etcd-pause-582989                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         54s
	  kube-system                 kube-apiserver-pause-582989             250m (12%)    0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-controller-manager-pause-582989    200m (10%)    0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-proxy-l9kk6                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-scheduler-pause-582989             100m (5%)     0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 39s                kube-proxy       
	  Normal  Starting                 16s                kube-proxy       
	  Normal  NodeAllocatableEnforced  63s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  63s (x8 over 63s)  kubelet          Node pause-582989 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x8 over 63s)  kubelet          Node pause-582989 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x7 over 63s)  kubelet          Node pause-582989 status is now: NodeHasSufficientPID
	  Normal  Starting                 63s                kubelet          Starting kubelet.
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s                kubelet          Node pause-582989 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s                kubelet          Node pause-582989 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s                kubelet          Node pause-582989 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  54s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                54s                kubelet          Node pause-582989 status is now: NodeReady
	  Normal  RegisteredNode           43s                node-controller  Node pause-582989 event: Registered Node pause-582989 in Controller
	  Normal  RegisteredNode           5s                 node-controller  Node pause-582989 event: Registered Node pause-582989 in Controller
	
	* 
	* ==> dmesg <==
	* [Nov 1 00:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071749] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.700999] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.216540] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.158791] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Nov 1 00:45] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000004] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.107093] systemd-fstab-generator[644]: Ignoring "noauto" for root device
	[  +0.117025] systemd-fstab-generator[655]: Ignoring "noauto" for root device
	[  +0.151792] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.119944] systemd-fstab-generator[679]: Ignoring "noauto" for root device
	[  +0.273706] systemd-fstab-generator[703]: Ignoring "noauto" for root device
	[  +9.774989] systemd-fstab-generator[927]: Ignoring "noauto" for root device
	[  +9.808715] systemd-fstab-generator[1260]: Ignoring "noauto" for root device
	[ +25.952781] systemd-fstab-generator[2050]: Ignoring "noauto" for root device
	[  +0.156903] systemd-fstab-generator[2061]: Ignoring "noauto" for root device
	[  +0.173148] systemd-fstab-generator[2074]: Ignoring "noauto" for root device
	[  +0.128713] systemd-fstab-generator[2085]: Ignoring "noauto" for root device
	[  +0.303134] kauditd_printk_skb: 23 callbacks suppressed
	[  +0.344554] systemd-fstab-generator[2245]: Ignoring "noauto" for root device
	[Nov 1 00:46] kauditd_printk_skb: 8 callbacks suppressed
	
	* 
	* ==> etcd [aa2851400cfbff707064071609e4b6e34a1316788b5409e084e8d29882ab2e45] <==
	* {"level":"warn","ts":"2023-11-01T00:45:44.331207Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-01T00:45:43.928649Z","time spent":"402.533179ms","remote":"127.0.0.1:48898","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":164,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/kube-public/default\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-public/default\" value_size:112 >> failure:<>"}
	{"level":"info","ts":"2023-11-01T00:45:44.331368Z","caller":"traceutil/trace.go:171","msg":"trace[1765063127] transaction","detail":"{read_only:false; response_revision:347; number_of_response:1; }","duration":"402.604837ms","start":"2023-11-01T00:45:43.928753Z","end":"2023-11-01T00:45:44.331357Z","steps":["trace[1765063127] 'process raft request'  (duration: 401.507387ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-01T00:45:44.331405Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-01T00:45:43.928748Z","time spent":"402.638395ms","remote":"127.0.0.1:48962","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3620,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-5dd5756b68\" mod_revision:0 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-5dd5756b68\" value_size:3560 >> failure:<>"}
	{"level":"info","ts":"2023-11-01T00:45:44.331548Z","caller":"traceutil/trace.go:171","msg":"trace[1379998469] transaction","detail":"{read_only:false; response_revision:348; number_of_response:1; }","duration":"400.164616ms","start":"2023-11-01T00:45:43.931373Z","end":"2023-11-01T00:45:44.331538Z","steps":["trace[1379998469] 'process raft request'  (duration: 398.919195ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-01T00:45:44.331593Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-01T00:45:43.931294Z","time spent":"400.274907ms","remote":"127.0.0.1:48964","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2124,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/controllerrevisions/kube-system/kube-proxy-dffc744c9\" mod_revision:0 > success:<request_put:<key:\"/registry/controllerrevisions/kube-system/kube-proxy-dffc744c9\" value_size:2054 >> failure:<>"}
	{"level":"warn","ts":"2023-11-01T00:45:44.33173Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.965344ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:612"}
	{"level":"info","ts":"2023-11-01T00:45:44.331758Z","caller":"traceutil/trace.go:171","msg":"trace[57907702] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:348; }","duration":"189.996086ms","start":"2023-11-01T00:45:44.141753Z","end":"2023-11-01T00:45:44.33175Z","steps":["trace[57907702] 'agreement among raft nodes before linearized reading'  (duration: 189.925955ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-01T00:45:44.331964Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"289.26651ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:3193"}
	{"level":"info","ts":"2023-11-01T00:45:44.332019Z","caller":"traceutil/trace.go:171","msg":"trace[753113455] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:348; }","duration":"289.323431ms","start":"2023-11-01T00:45:44.042687Z","end":"2023-11-01T00:45:44.33201Z","steps":["trace[753113455] 'agreement among raft nodes before linearized reading'  (duration: 289.236943ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-01T00:45:44.33458Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"405.711096ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-01T00:45:44.334652Z","caller":"traceutil/trace.go:171","msg":"trace[279390574] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:348; }","duration":"405.788775ms","start":"2023-11-01T00:45:43.928851Z","end":"2023-11-01T00:45:44.334639Z","steps":["trace[279390574] 'agreement among raft nodes before linearized reading'  (duration: 405.675299ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-01T00:45:44.334696Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-01T00:45:43.928775Z","time spent":"405.902438ms","remote":"127.0.0.1:48852","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2023-11-01T00:45:44.329718Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"413.243757ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/edit\" ","response":"range_response_count:1 size:2205"}
	{"level":"info","ts":"2023-11-01T00:45:44.335119Z","caller":"traceutil/trace.go:171","msg":"trace[1534831500] range","detail":"{range_begin:/registry/clusterroles/edit; range_end:; response_count:1; response_revision:344; }","duration":"418.646734ms","start":"2023-11-01T00:45:43.916457Z","end":"2023-11-01T00:45:44.335104Z","steps":["trace[1534831500] 'agreement among raft nodes before linearized reading'  (duration: 413.12707ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-01T00:45:44.335159Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-01T00:45:43.916449Z","time spent":"418.697001ms","remote":"127.0.0.1:48930","response type":"/etcdserverpb.KV/Range","request count":0,"request size":29,"response count":1,"response size":2228,"request content":"key:\"/registry/clusterroles/edit\" "}
	{"level":"info","ts":"2023-11-01T00:45:50.83517Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-11-01T00:45:50.835262Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-582989","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.166:2380"],"advertise-client-urls":["https://192.168.83.166:2379"]}
	{"level":"warn","ts":"2023-11-01T00:45:50.835435Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-11-01T00:45:50.835586Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-11-01T00:45:50.887912Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.83.166:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-11-01T00:45:50.888011Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.83.166:2379: use of closed network connection"}
	{"level":"info","ts":"2023-11-01T00:45:50.888113Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"14b81da0c68bfbd7","current-leader-member-id":"14b81da0c68bfbd7"}
	{"level":"info","ts":"2023-11-01T00:45:50.894405Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.83.166:2380"}
	{"level":"info","ts":"2023-11-01T00:45:50.894567Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.83.166:2380"}
	{"level":"info","ts":"2023-11-01T00:45:50.894612Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-582989","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.166:2380"],"advertise-client-urls":["https://192.168.83.166:2379"]}
	
	* 
	* ==> etcd [ae6bfaaaa10e98d835321734c5099f843e5be624c18735b22503fe925b97bca9] <==
	* {"level":"info","ts":"2023-11-01T00:46:04.739248Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-01T00:46:04.739276Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-01T00:46:04.739557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14b81da0c68bfbd7 switched to configuration voters=(1492975852836355031)"}
	{"level":"info","ts":"2023-11-01T00:46:04.739656Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a30fd0400b31c5f5","local-member-id":"14b81da0c68bfbd7","added-peer-id":"14b81da0c68bfbd7","added-peer-peer-urls":["https://192.168.83.166:2380"]}
	{"level":"info","ts":"2023-11-01T00:46:04.739865Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a30fd0400b31c5f5","local-member-id":"14b81da0c68bfbd7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T00:46:04.739924Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T00:46:04.741671Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-01T00:46:04.741993Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"14b81da0c68bfbd7","initial-advertise-peer-urls":["https://192.168.83.166:2380"],"listen-peer-urls":["https://192.168.83.166:2380"],"advertise-client-urls":["https://192.168.83.166:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.83.166:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-01T00:46:04.743191Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.83.166:2380"}
	{"level":"info","ts":"2023-11-01T00:46:04.743615Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.83.166:2380"}
	{"level":"info","ts":"2023-11-01T00:46:04.743545Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-01T00:46:06.404061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14b81da0c68bfbd7 is starting a new election at term 2"}
	{"level":"info","ts":"2023-11-01T00:46:06.404204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14b81da0c68bfbd7 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-11-01T00:46:06.404285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14b81da0c68bfbd7 received MsgPreVoteResp from 14b81da0c68bfbd7 at term 2"}
	{"level":"info","ts":"2023-11-01T00:46:06.404367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14b81da0c68bfbd7 became candidate at term 3"}
	{"level":"info","ts":"2023-11-01T00:46:06.404459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14b81da0c68bfbd7 received MsgVoteResp from 14b81da0c68bfbd7 at term 3"}
	{"level":"info","ts":"2023-11-01T00:46:06.40449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14b81da0c68bfbd7 became leader at term 3"}
	{"level":"info","ts":"2023-11-01T00:46:06.404578Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 14b81da0c68bfbd7 elected leader 14b81da0c68bfbd7 at term 3"}
	{"level":"info","ts":"2023-11-01T00:46:06.407862Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"14b81da0c68bfbd7","local-member-attributes":"{Name:pause-582989 ClientURLs:[https://192.168.83.166:2379]}","request-path":"/0/members/14b81da0c68bfbd7/attributes","cluster-id":"a30fd0400b31c5f5","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-01T00:46:06.407922Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-01T00:46:06.408347Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-01T00:46:06.408574Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-01T00:46:06.40871Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-01T00:46:06.408975Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-01T00:46:06.410294Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.166:2379"}
	
	* 
	* ==> kernel <==
	*  00:46:25 up 1 min,  0 users,  load average: 1.26, 0.47, 0.17
	Linux pause-582989 5.10.57 #1 SMP Tue Oct 31 22:14:31 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [3b6d5871e2f8d441311be04de5123e1c64372b69980840e5e3cc24e341444ac5] <==
	* I1101 00:46:07.858570       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1101 00:46:07.858602       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1101 00:46:07.858751       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I1101 00:46:07.858838       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I1101 00:46:07.858955       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I1101 00:46:07.858449       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I1101 00:46:07.899493       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1101 00:46:07.899643       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1101 00:46:07.981165       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 00:46:08.024511       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1101 00:46:08.055846       1 shared_informer.go:318] Caches are synced for configmaps
	I1101 00:46:08.057566       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1101 00:46:08.058137       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 00:46:08.058423       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1101 00:46:08.058466       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1101 00:46:08.059161       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1101 00:46:08.059311       1 aggregator.go:166] initial CRD sync complete...
	I1101 00:46:08.059340       1 autoregister_controller.go:141] Starting autoregister controller
	I1101 00:46:08.059362       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 00:46:08.059384       1 cache.go:39] Caches are synced for autoregister controller
	I1101 00:46:08.059520       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E1101 00:46:08.110520       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 00:46:08.863558       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 00:46:20.380506       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 00:46:20.442131       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-apiserver [83ba5cf1f9ad384bdf7e669ae56276f532bedc64d0f61ac07a53d1079a0c29e3] <==
	* }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 00:45:50.863976       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 00:45:50.864088       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 00:45:50.868339       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [5e8c4a61c641e01e33282f1d1ce144e3abbd34a8d1849b85d43f47b4c3c2db3d] <==
	* I1101 00:46:20.406288       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1101 00:46:20.406471       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-582989"
	I1101 00:46:20.406601       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1101 00:46:20.406667       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1101 00:46:20.406734       1 shared_informer.go:318] Caches are synced for namespace
	I1101 00:46:20.407429       1 taint_manager.go:211] "Sending events to api server"
	I1101 00:46:20.407638       1 event.go:307] "Event occurred" object="pause-582989" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-582989 event: Registered Node pause-582989 in Controller"
	I1101 00:46:20.410004       1 shared_informer.go:318] Caches are synced for deployment
	I1101 00:46:20.412483       1 shared_informer.go:318] Caches are synced for cronjob
	I1101 00:46:20.415689       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I1101 00:46:20.416928       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I1101 00:46:20.417282       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I1101 00:46:20.418720       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1101 00:46:20.420997       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1101 00:46:20.424290       1 shared_informer.go:318] Caches are synced for attach detach
	I1101 00:46:20.428890       1 shared_informer.go:318] Caches are synced for job
	I1101 00:46:20.428983       1 shared_informer.go:318] Caches are synced for endpoint
	I1101 00:46:20.434302       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1101 00:46:20.494467       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 00:46:20.510230       1 shared_informer.go:318] Caches are synced for stateful set
	I1101 00:46:20.538271       1 shared_informer.go:318] Caches are synced for disruption
	I1101 00:46:20.554026       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 00:46:20.902193       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 00:46:20.902310       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1101 00:46:20.956723       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-controller-manager [cab178bec38f9a3c5e477d390c044d5d08cf63740f84beb5df8499a8074bad2b] <==
	* I1101 00:45:42.916239       1 shared_informer.go:318] Caches are synced for crt configmap
	I1101 00:45:42.916624       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1101 00:45:42.982943       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 00:45:42.989887       1 shared_informer.go:318] Caches are synced for disruption
	I1101 00:45:43.019192       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1101 00:45:43.057155       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 00:45:43.443671       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 00:45:43.448298       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 00:45:43.448428       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1101 00:45:44.345668       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1101 00:45:44.375191       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-l9kk6"
	I1101 00:45:44.479035       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-lp248"
	I1101 00:45:44.527872       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-f45gz"
	I1101 00:45:44.556002       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1101 00:45:44.660993       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="325.642748ms"
	I1101 00:45:44.720442       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-lp248"
	I1101 00:45:44.773756       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="112.000484ms"
	I1101 00:45:44.833007       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.096925ms"
	I1101 00:45:44.836265       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="104.147µs"
	I1101 00:45:46.666339       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="112.438µs"
	I1101 00:45:46.717271       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="122.886µs"
	I1101 00:45:46.732329       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="60.96µs"
	I1101 00:45:46.735485       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="64.679µs"
	I1101 00:45:47.668666       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.900216ms"
	I1101 00:45:47.670661       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="97.021µs"
	
	* 
	* ==> kube-proxy [5ef0c9df2cfa1eddf797e9cf626df84a44695ddca78bc29dbaca6cc572a2bd1f] <==
	* I1101 00:45:45.734212       1 server_others.go:69] "Using iptables proxy"
	I1101 00:45:45.749281       1 node.go:141] Successfully retrieved node IP: 192.168.83.166
	I1101 00:45:45.797438       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1101 00:45:45.797512       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 00:45:45.800340       1 server_others.go:152] "Using iptables Proxier"
	I1101 00:45:45.800757       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 00:45:45.801191       1 server.go:846] "Version info" version="v1.28.3"
	I1101 00:45:45.801294       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 00:45:45.803318       1 config.go:188] "Starting service config controller"
	I1101 00:45:45.803564       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 00:45:45.803629       1 config.go:97] "Starting endpoint slice config controller"
	I1101 00:45:45.803652       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 00:45:45.804460       1 config.go:315] "Starting node config controller"
	I1101 00:45:45.804507       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 00:45:45.908966       1 shared_informer.go:318] Caches are synced for service config
	I1101 00:45:45.909319       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1101 00:45:45.909686       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [cda943c34c9fa9fa8ba74e1ba2e3586a69606f53cb45fd92e2fd4954a82e6677] <==
	* I1101 00:46:05.259085       1 server_others.go:69] "Using iptables proxy"
	I1101 00:46:08.042506       1 node.go:141] Successfully retrieved node IP: 192.168.83.166
	I1101 00:46:08.197414       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1101 00:46:08.197504       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 00:46:08.206766       1 server_others.go:152] "Using iptables Proxier"
	I1101 00:46:08.207004       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 00:46:08.207351       1 server.go:846] "Version info" version="v1.28.3"
	I1101 00:46:08.207447       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 00:46:08.209241       1 config.go:188] "Starting service config controller"
	I1101 00:46:08.209301       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 00:46:08.209349       1 config.go:97] "Starting endpoint slice config controller"
	I1101 00:46:08.209355       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 00:46:08.219405       1 config.go:315] "Starting node config controller"
	I1101 00:46:08.219497       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 00:46:08.309764       1 shared_informer.go:318] Caches are synced for service config
	I1101 00:46:08.310134       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1101 00:46:08.320447       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [1201aea9235f1fdc9c9b623c75a375f4cc07d43a0bacac462dbb5ab5d01dded9] <==
	* E1101 00:45:28.691171       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 00:45:28.735499       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1101 00:45:28.735535       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1101 00:45:28.868116       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1101 00:45:28.868265       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1101 00:45:28.901106       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1101 00:45:28.901175       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1101 00:45:28.905232       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1101 00:45:28.905291       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1101 00:45:28.912909       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1101 00:45:28.913039       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1101 00:45:28.963391       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1101 00:45:28.963437       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1101 00:45:28.997674       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1101 00:45:28.997750       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1101 00:45:29.022720       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1101 00:45:29.022775       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1101 00:45:29.055869       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1101 00:45:29.055919       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1101 00:45:29.087916       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1101 00:45:29.087969       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1101 00:45:30.928654       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 00:45:50.845948       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1101 00:45:50.846092       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E1101 00:45:50.855458       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [6fb41d2e45f09c62cd26743a54a7e781ea89ce7b2a8b5f5b571901aead7930ea] <==
	* I1101 00:46:05.327163       1 serving.go:348] Generated self-signed cert in-memory
	W1101 00:46:07.929696       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 00:46:07.930258       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 00:46:07.930477       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 00:46:07.930509       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 00:46:07.994079       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1101 00:46:07.994220       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 00:46:08.009376       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1101 00:46:08.012933       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1101 00:46:08.013052       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 00:46:08.013088       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 00:46:08.114225       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-11-01 00:44:58 UTC, ends at Wed 2023-11-01 00:46:25 UTC. --
	Nov 01 00:45:59 pause-582989 kubelet[1267]: E1101 00:45:59.544351    1267 kubelet.go:2473] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 01 00:45:59 pause-582989 kubelet[1267]: E1101 00:45:59.759184    1267 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="nil"
	Nov 01 00:45:59 pause-582989 kubelet[1267]: E1101 00:45:59.759253    1267 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 01 00:45:59 pause-582989 kubelet[1267]: E1101 00:45:59.759269    1267 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 01 00:46:00 pause-582989 kubelet[1267]: I1101 00:46:00.766969    1267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="288ca3a4d5af9189582424e258f73b69ce95709af369e2c94200064225472c12"
	Nov 01 00:46:00 pause-582989 kubelet[1267]: I1101 00:46:00.772106    1267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f1605d92fdab9bab77b7e3535f0de0769df84ef8d5f1f36b7f5248b7fc20523"
	Nov 01 00:46:00 pause-582989 kubelet[1267]: I1101 00:46:00.789050    1267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a809662ae1ca8168a9cdd42ed5b07fb545018c1f66640bd3148ec9f5c20aa8ce"
	Nov 01 00:46:00 pause-582989 kubelet[1267]: I1101 00:46:00.807344    1267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a486ebfbadaf73959d212e236f258ca733be27285f0bb18593257f306d463d7"
	Nov 01 00:46:00 pause-582989 kubelet[1267]: I1101 00:46:00.814345    1267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a5a9f64e392e71f23e2f8d636a7b8332fd7129cf1cef0414a38e5f445008dc6"
	Nov 01 00:46:00 pause-582989 kubelet[1267]: I1101 00:46:00.844524    1267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06a1a64dd52b65484827de8483fedc099c22799f946c7791e841d96918ef3b35"
	Nov 01 00:46:01 pause-582989 kubelet[1267]: I1101 00:46:01.542389    1267 status_manager.go:853] "Failed to get status for pod" podUID="58defa582c316c79b3d8f3f2b1f06708" pod="kube-system/etcd-pause-582989" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-582989\": dial tcp 192.168.83.166:8443: connect: connection refused"
	Nov 01 00:46:01 pause-582989 kubelet[1267]: I1101 00:46:01.543836    1267 status_manager.go:853] "Failed to get status for pod" podUID="2dfdf36a-3ee4-4786-9d57-131962bc4c88" pod="kube-system/kube-proxy-l9kk6" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l9kk6\": dial tcp 192.168.83.166:8443: connect: connection refused"
	Nov 01 00:46:01 pause-582989 kubelet[1267]: I1101 00:46:01.545206    1267 status_manager.go:853] "Failed to get status for pod" podUID="4b4ae73e-212e-4a24-a6d7-25ab15186ca8" pod="kube-system/coredns-5dd5756b68-f45gz" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-f45gz\": dial tcp 192.168.83.166:8443: connect: connection refused"
	Nov 01 00:46:01 pause-582989 kubelet[1267]: I1101 00:46:01.545955    1267 status_manager.go:853] "Failed to get status for pod" podUID="6c1ae1fffd851eec445a886d4c3ef691" pod="kube-system/kube-scheduler-pause-582989" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-582989\": dial tcp 192.168.83.166:8443: connect: connection refused"
	Nov 01 00:46:01 pause-582989 kubelet[1267]: I1101 00:46:01.546736    1267 status_manager.go:853] "Failed to get status for pod" podUID="2800027ab3fa7a2334199a818fc36bcd" pod="kube-system/kube-controller-manager-pause-582989" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-582989\": dial tcp 192.168.83.166:8443: connect: connection refused"
	Nov 01 00:46:01 pause-582989 kubelet[1267]: I1101 00:46:01.547350    1267 status_manager.go:853] "Failed to get status for pod" podUID="eeefc617d942f43c20a82588725d37c1" pod="kube-system/kube-apiserver-pause-582989" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-582989\": dial tcp 192.168.83.166:8443: connect: connection refused"
	Nov 01 00:46:02 pause-582989 kubelet[1267]: E1101 00:46:02.500010    1267 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-582989\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-582989?resourceVersion=0&timeout=10s\": dial tcp 192.168.83.166:8443: connect: connection refused"
	Nov 01 00:46:02 pause-582989 kubelet[1267]: E1101 00:46:02.500278    1267 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-582989\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-582989?timeout=10s\": dial tcp 192.168.83.166:8443: connect: connection refused"
	Nov 01 00:46:02 pause-582989 kubelet[1267]: E1101 00:46:02.500442    1267 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-582989\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-582989?timeout=10s\": dial tcp 192.168.83.166:8443: connect: connection refused"
	Nov 01 00:46:02 pause-582989 kubelet[1267]: E1101 00:46:02.500643    1267 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-582989\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-582989?timeout=10s\": dial tcp 192.168.83.166:8443: connect: connection refused"
	Nov 01 00:46:02 pause-582989 kubelet[1267]: E1101 00:46:02.500897    1267 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-582989\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-582989?timeout=10s\": dial tcp 192.168.83.166:8443: connect: connection refused"
	Nov 01 00:46:02 pause-582989 kubelet[1267]: E1101 00:46:02.500933    1267 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
	Nov 01 00:46:07 pause-582989 kubelet[1267]: E1101 00:46:07.947981    1267 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Nov 01 00:46:12 pause-582989 kubelet[1267]: I1101 00:46:12.568413    1267 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 00:46:12 pause-582989 kubelet[1267]: I1101 00:46:12.569576    1267 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 00:46:24.360835   44290 logs.go:266] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/17486-7305/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-582989 -n pause-582989
helpers_test.go:261: (dbg) Run:  kubectl --context pause-582989 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-582989 -n pause-582989
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-582989 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-582989 logs -n 25: (1.489589636s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p cilium-090856 sudo                  | cilium-090856             | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC |                     |
	|         | systemctl cat containerd               |                           |         |                |                     |                     |
	|         | --no-pager                             |                           |         |                |                     |                     |
	| ssh     | -p cilium-090856 sudo cat              | cilium-090856             | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |                |                     |                     |
	| ssh     | -p cilium-090856 sudo cat              | cilium-090856             | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |                |                     |                     |
	| ssh     | -p cilium-090856 sudo                  | cilium-090856             | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC |                     |
	|         | containerd config dump                 |                           |         |                |                     |                     |
	| ssh     | -p cilium-090856 sudo                  | cilium-090856             | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC |                     |
	|         | systemctl status crio --all            |                           |         |                |                     |                     |
	|         | --full --no-pager                      |                           |         |                |                     |                     |
	| ssh     | -p cilium-090856 sudo                  | cilium-090856             | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |                |                     |                     |
	| ssh     | -p cilium-090856 sudo find             | cilium-090856             | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |                |                     |                     |
	| ssh     | -p cilium-090856 sudo crio             | cilium-090856             | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC |                     |
	|         | config                                 |                           |         |                |                     |                     |
	| delete  | -p cilium-090856                       | cilium-090856             | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	| delete  | -p force-systemd-env-256488            | force-systemd-env-256488  | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	| start   | -p cert-expiration-902201              | cert-expiration-902201    | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:44 UTC |
	|         | --memory=2048                          |                           |         |                |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |                |                     |                     |
	|         | --driver=kvm2                          |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| ssh     | -p NoKubernetes-345470 sudo            | NoKubernetes-345470       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC |                     |
	|         | systemctl is-active --quiet            |                           |         |                |                     |                     |
	|         | service kubelet                        |                           |         |                |                     |                     |
	| delete  | -p NoKubernetes-345470                 | NoKubernetes-345470       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	| delete  | -p running-upgrade-411881              | running-upgrade-411881    | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	| start   | -p force-systemd-flag-644407           | force-systemd-flag-644407 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:44 UTC |
	|         | --memory=2048 --force-systemd          |                           |         |                |                     |                     |
	|         | --alsologtostderr                      |                           |         |                |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| start   | -p cert-options-406160                 | cert-options-406160       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:45 UTC |
	|         | --memory=2048                          |                           |         |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |                |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |                |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |                |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |                |                     |                     |
	|         | --driver=kvm2                          |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| ssh     | force-systemd-flag-644407 ssh cat      | force-systemd-flag-644407 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:44 UTC | 01 Nov 23 00:44 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |                |                     |                     |
	| delete  | -p force-systemd-flag-644407           | force-systemd-flag-644407 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:44 UTC | 01 Nov 23 00:44 UTC |
	| start   | -p pause-582989 --memory=2048          | pause-582989              | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:44 UTC | 01 Nov 23 00:45 UTC |
	|         | --install-addons=false                 |                           |         |                |                     |                     |
	|         | --wait=all --driver=kvm2               |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| ssh     | cert-options-406160 ssh                | cert-options-406160       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:45 UTC | 01 Nov 23 00:45 UTC |
	|         | openssl x509 -text -noout -in          |                           |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                           |         |                |                     |                     |
	| ssh     | -p cert-options-406160 -- sudo         | cert-options-406160       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:45 UTC | 01 Nov 23 00:45 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                           |         |                |                     |                     |
	| delete  | -p cert-options-406160                 | cert-options-406160       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:45 UTC | 01 Nov 23 00:45 UTC |
	| start   | -p auto-090856 --memory=3072           | auto-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:45 UTC | 01 Nov 23 00:46 UTC |
	|         | --alsologtostderr --wait=true          |                           |         |                |                     |                     |
	|         | --wait-timeout=15m                     |                           |         |                |                     |                     |
	|         | --driver=kvm2                          |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| start   | -p pause-582989                        | pause-582989              | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:45 UTC | 01 Nov 23 00:46 UTC |
	|         | --alsologtostderr                      |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| ssh     | -p auto-090856 pgrep -a                | auto-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:46 UTC | 01 Nov 23 00:46 UTC |
	|         | kubelet                                |                           |         |                |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/01 00:45:49
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 00:45:49.909326   44050 out.go:296] Setting OutFile to fd 1 ...
	I1101 00:45:49.909495   44050 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:45:49.909508   44050 out.go:309] Setting ErrFile to fd 2...
	I1101 00:45:49.909515   44050 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:45:49.909695   44050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7305/.minikube/bin
	I1101 00:45:49.910253   44050 out.go:303] Setting JSON to false
	I1101 00:45:49.911273   44050 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5295,"bootTime":1698794255,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 00:45:49.911333   44050 start.go:138] virtualization: kvm guest
	I1101 00:45:49.914161   44050 out.go:177] * [pause-582989] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1101 00:45:49.916142   44050 out.go:177]   - MINIKUBE_LOCATION=17486
	I1101 00:45:49.916194   44050 notify.go:220] Checking for updates...
	I1101 00:45:49.917984   44050 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 00:45:49.919812   44050 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 00:45:49.921892   44050 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7305/.minikube
	I1101 00:45:49.923461   44050 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 00:45:49.925250   44050 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 00:45:49.928057   44050 config.go:182] Loaded profile config "pause-582989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:45:49.928572   44050 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:45:49.928634   44050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:45:49.945908   44050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37087
	I1101 00:45:49.946410   44050 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:45:49.946998   44050 main.go:141] libmachine: Using API Version  1
	I1101 00:45:49.947033   44050 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:45:49.947404   44050 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:45:49.947608   44050 main.go:141] libmachine: (pause-582989) Calling .DriverName
	I1101 00:45:49.947910   44050 driver.go:378] Setting default libvirt URI to qemu:///system
	I1101 00:45:49.948350   44050 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:45:49.948409   44050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:45:49.963546   44050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34707
	I1101 00:45:49.964078   44050 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:45:49.964647   44050 main.go:141] libmachine: Using API Version  1
	I1101 00:45:49.964669   44050 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:45:49.965098   44050 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:45:49.965306   44050 main.go:141] libmachine: (pause-582989) Calling .DriverName
	I1101 00:45:50.008360   44050 out.go:177] * Using the kvm2 driver based on existing profile
	I1101 00:45:50.009822   44050 start.go:298] selected driver: kvm2
	I1101 00:45:50.009838   44050 start.go:902] validating driver "kvm2" against &{Name:pause-582989 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.3 ClusterName:pause-582989 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.166 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-install
er:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:45:50.010008   44050 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 00:45:50.010304   44050 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:45:50.010402   44050 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17486-7305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1101 00:45:50.028420   44050 install.go:137] /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1101 00:45:50.029511   44050 cni.go:84] Creating CNI manager for ""
	I1101 00:45:50.029536   44050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 00:45:50.029553   44050 start_flags.go:323] config:
	{Name:pause-582989 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:pause-582989 Namespace:default APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.166 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false po
rtainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:45:50.029841   44050 iso.go:125] acquiring lock: {Name:mk1f649ca0b7c1ae293cd66cb85f9eeda028b20b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:45:50.033051   44050 out.go:177] * Starting control plane node pause-582989 in cluster pause-582989
	I1101 00:45:50.035168   44050 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 00:45:50.035230   44050 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1101 00:45:50.035239   44050 cache.go:56] Caching tarball of preloaded images
	I1101 00:45:50.035369   44050 preload.go:174] Found /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 00:45:50.035385   44050 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1101 00:45:50.035580   44050 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/pause-582989/config.json ...
	I1101 00:45:50.035871   44050 start.go:365] acquiring machines lock for pause-582989: {Name:mk7aad88408c319111b9be8e59d9593a9e88374b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 00:45:50.035957   44050 start.go:369] acquired machines lock for "pause-582989" in 52.258µs
	I1101 00:45:50.035988   44050 start.go:96] Skipping create...Using existing machine configuration
	I1101 00:45:50.035999   44050 fix.go:54] fixHost starting: 
	I1101 00:45:50.036380   44050 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:45:50.036429   44050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:45:50.052890   44050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35841
	I1101 00:45:50.053422   44050 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:45:50.053985   44050 main.go:141] libmachine: Using API Version  1
	I1101 00:45:50.054013   44050 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:45:50.054336   44050 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:45:50.054692   44050 main.go:141] libmachine: (pause-582989) Calling .DriverName
	I1101 00:45:50.054869   44050 main.go:141] libmachine: (pause-582989) Calling .GetState
	I1101 00:45:50.056720   44050 fix.go:102] recreateIfNeeded on pause-582989: state=Running err=<nil>
	W1101 00:45:50.056743   44050 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 00:45:50.058773   44050 out.go:177] * Updating the running kvm2 "pause-582989" VM ...
	I1101 00:45:47.724719   43726 out.go:204]   - Booting up control plane ...
	I1101 00:45:47.724885   43726 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 00:45:47.725029   43726 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 00:45:47.725682   43726 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 00:45:47.742040   43726 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 00:45:47.743389   43726 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 00:45:47.743476   43726 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1101 00:45:47.880397   43726 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 00:45:50.060316   44050 machine.go:88] provisioning docker machine ...
	I1101 00:45:50.060354   44050 main.go:141] libmachine: (pause-582989) Calling .DriverName
	I1101 00:45:50.060643   44050 main.go:141] libmachine: (pause-582989) Calling .GetMachineName
	I1101 00:45:50.060815   44050 buildroot.go:166] provisioning hostname "pause-582989"
	I1101 00:45:50.060848   44050 main.go:141] libmachine: (pause-582989) Calling .GetMachineName
	I1101 00:45:50.061018   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHHostname
	I1101 00:45:50.064116   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.064710   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:50.064743   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.064941   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHPort
	I1101 00:45:50.065146   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:50.065305   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:50.065474   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHUsername
	I1101 00:45:50.065669   44050 main.go:141] libmachine: Using SSH client type: native
	I1101 00:45:50.066043   44050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.83.166 22 <nil> <nil>}
	I1101 00:45:50.066062   44050 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-582989 && echo "pause-582989" | sudo tee /etc/hostname
	I1101 00:45:50.201985   44050 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-582989
	
	I1101 00:45:50.202025   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHHostname
	I1101 00:45:50.205426   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.205808   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:50.205854   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.206080   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHPort
	I1101 00:45:50.206277   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:50.206421   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:50.206657   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHUsername
	I1101 00:45:50.206879   44050 main.go:141] libmachine: Using SSH client type: native
	I1101 00:45:50.207376   44050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.83.166 22 <nil> <nil>}
	I1101 00:45:50.207404   44050 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-582989' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-582989/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-582989' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 00:45:50.337919   44050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 00:45:50.337952   44050 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1101 00:45:50.337977   44050 buildroot.go:174] setting up certificates
	I1101 00:45:50.337987   44050 provision.go:83] configureAuth start
	I1101 00:45:50.337997   44050 main.go:141] libmachine: (pause-582989) Calling .GetMachineName
	I1101 00:45:50.338365   44050 main.go:141] libmachine: (pause-582989) Calling .GetIP
	I1101 00:45:50.341882   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.342333   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:50.342380   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.342706   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHHostname
	I1101 00:45:50.345420   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.345844   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:50.345877   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.346056   44050 provision.go:138] copyHostCerts
	I1101 00:45:50.346132   44050 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem, removing ...
	I1101 00:45:50.346157   44050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 00:45:50.346232   44050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1101 00:45:50.346420   44050 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem, removing ...
	I1101 00:45:50.346436   44050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 00:45:50.346469   44050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1101 00:45:50.346554   44050 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem, removing ...
	I1101 00:45:50.346567   44050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 00:45:50.346601   44050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1101 00:45:50.346670   44050 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.pause-582989 san=[192.168.83.166 192.168.83.166 localhost 127.0.0.1 minikube pause-582989]
	I1101 00:45:50.484167   44050 provision.go:172] copyRemoteCerts
	I1101 00:45:50.484224   44050 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 00:45:50.484247   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHHostname
	I1101 00:45:50.487514   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.488020   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:50.488062   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.488305   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHPort
	I1101 00:45:50.488511   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:50.488686   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHUsername
	I1101 00:45:50.488843   44050 sshutil.go:53] new ssh client: &{IP:192.168.83.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/pause-582989/id_rsa Username:docker}
	I1101 00:45:50.581873   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1101 00:45:50.613798   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 00:45:50.649371   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 00:45:50.679006   44050 provision.go:86] duration metric: configureAuth took 341.003037ms
	I1101 00:45:50.679040   44050 buildroot.go:189] setting minikube options for container-runtime
	I1101 00:45:50.679330   44050 config.go:182] Loaded profile config "pause-582989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:45:50.679427   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHHostname
	I1101 00:45:50.682431   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.682957   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:50.683002   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:50.683289   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHPort
	I1101 00:45:50.683552   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:50.683737   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:50.683976   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHUsername
	I1101 00:45:50.684215   44050 main.go:141] libmachine: Using SSH client type: native
	I1101 00:45:50.684701   44050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.83.166 22 <nil> <nil>}
	I1101 00:45:50.684728   44050 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 00:45:56.379818   43726 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503810 seconds
	I1101 00:45:56.380016   43726 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 00:45:56.401566   43726 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 00:45:56.938430   43726 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 00:45:56.938734   43726 kubeadm.go:322] [mark-control-plane] Marking the node auto-090856 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 00:45:57.455120   43726 kubeadm.go:322] [bootstrap-token] Using token: b6mxf3.rs0dinkr1zyirwe5
	I1101 00:45:57.456672   43726 out.go:204]   - Configuring RBAC rules ...
	I1101 00:45:57.456826   43726 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 00:45:57.465111   43726 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 00:45:57.474810   43726 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 00:45:57.488186   43726 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 00:45:57.493372   43726 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 00:45:57.501819   43726 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 00:45:57.522367   43726 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 00:45:57.833204   43726 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1101 00:45:57.878960   43726 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1101 00:45:57.880180   43726 kubeadm.go:322] 
	I1101 00:45:57.880271   43726 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1101 00:45:57.880283   43726 kubeadm.go:322] 
	I1101 00:45:57.880403   43726 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1101 00:45:57.880436   43726 kubeadm.go:322] 
	I1101 00:45:57.880528   43726 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1101 00:45:57.880625   43726 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 00:45:57.880711   43726 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 00:45:57.880727   43726 kubeadm.go:322] 
	I1101 00:45:57.880835   43726 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1101 00:45:57.880851   43726 kubeadm.go:322] 
	I1101 00:45:57.880910   43726 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 00:45:57.880920   43726 kubeadm.go:322] 
	I1101 00:45:57.880984   43726 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1101 00:45:57.881071   43726 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 00:45:57.881150   43726 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 00:45:57.881157   43726 kubeadm.go:322] 
	I1101 00:45:57.881256   43726 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 00:45:57.881350   43726 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1101 00:45:57.881358   43726 kubeadm.go:322] 
	I1101 00:45:57.881465   43726 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token b6mxf3.rs0dinkr1zyirwe5 \
	I1101 00:45:57.881604   43726 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 \
	I1101 00:45:57.881629   43726 kubeadm.go:322] 	--control-plane 
	I1101 00:45:57.881635   43726 kubeadm.go:322] 
	I1101 00:45:57.881736   43726 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1101 00:45:57.881743   43726 kubeadm.go:322] 
	I1101 00:45:57.881855   43726 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token b6mxf3.rs0dinkr1zyirwe5 \
	I1101 00:45:57.882000   43726 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 
	I1101 00:45:57.882155   43726 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 00:45:57.882171   43726 cni.go:84] Creating CNI manager for ""
	I1101 00:45:57.882180   43726 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 00:45:57.883936   43726 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 00:45:56.347433   44050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 00:45:56.347476   44050 machine.go:91] provisioned docker machine in 6.28713445s
	I1101 00:45:56.347489   44050 start.go:300] post-start starting for "pause-582989" (driver="kvm2")
	I1101 00:45:56.347502   44050 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 00:45:56.347526   44050 main.go:141] libmachine: (pause-582989) Calling .DriverName
	I1101 00:45:56.348049   44050 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 00:45:56.348077   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHHostname
	I1101 00:45:56.351396   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:56.351841   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:56.351875   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:56.352095   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHPort
	I1101 00:45:56.352316   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:56.352470   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHUsername
	I1101 00:45:56.352624   44050 sshutil.go:53] new ssh client: &{IP:192.168.83.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/pause-582989/id_rsa Username:docker}
	I1101 00:45:56.445948   44050 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 00:45:56.450833   44050 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 00:45:56.450865   44050 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1101 00:45:56.450968   44050 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1101 00:45:56.451060   44050 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> 145042.pem in /etc/ssl/certs
	I1101 00:45:56.451177   44050 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 00:45:56.460662   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /etc/ssl/certs/145042.pem (1708 bytes)
	I1101 00:45:56.484778   44050 start.go:303] post-start completed in 137.270757ms
	I1101 00:45:56.484811   44050 fix.go:56] fixHost completed within 6.448811795s
	I1101 00:45:56.484840   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHHostname
	I1101 00:45:56.488153   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:56.488557   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:56.488593   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:56.488765   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHPort
	I1101 00:45:56.488978   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:56.489134   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:56.489304   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHUsername
	I1101 00:45:56.489427   44050 main.go:141] libmachine: Using SSH client type: native
	I1101 00:45:56.489776   44050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.83.166 22 <nil> <nil>}
	I1101 00:45:56.489795   44050 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1101 00:45:56.609023   44050 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698799556.602533926
	
	I1101 00:45:56.609053   44050 fix.go:206] guest clock: 1698799556.602533926
	I1101 00:45:56.609064   44050 fix.go:219] Guest: 2023-11-01 00:45:56.602533926 +0000 UTC Remote: 2023-11-01 00:45:56.484817337 +0000 UTC m=+6.632414356 (delta=117.716589ms)
	I1101 00:45:56.609102   44050 fix.go:190] guest clock delta is within tolerance: 117.716589ms
	I1101 00:45:56.609107   44050 start.go:83] releasing machines lock for "pause-582989", held for 6.573137262s
	I1101 00:45:56.609128   44050 main.go:141] libmachine: (pause-582989) Calling .DriverName
	I1101 00:45:56.609414   44050 main.go:141] libmachine: (pause-582989) Calling .GetIP
	I1101 00:45:56.612117   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:56.612457   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:56.612497   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:56.612654   44050 main.go:141] libmachine: (pause-582989) Calling .DriverName
	I1101 00:45:56.613281   44050 main.go:141] libmachine: (pause-582989) Calling .DriverName
	I1101 00:45:56.613485   44050 main.go:141] libmachine: (pause-582989) Calling .DriverName
	I1101 00:45:56.613597   44050 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 00:45:56.613645   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHHostname
	I1101 00:45:56.613766   44050 ssh_runner.go:195] Run: cat /version.json
	I1101 00:45:56.613793   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHHostname
	I1101 00:45:56.616874   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:56.617182   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:56.617394   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:56.617425   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:56.617611   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHPort
	I1101 00:45:56.617682   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:56.617711   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:56.617811   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:56.617894   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHPort
	I1101 00:45:56.618101   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHUsername
	I1101 00:45:56.618146   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHKeyPath
	I1101 00:45:56.618263   44050 sshutil.go:53] new ssh client: &{IP:192.168.83.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/pause-582989/id_rsa Username:docker}
	I1101 00:45:56.618325   44050 main.go:141] libmachine: (pause-582989) Calling .GetSSHUsername
	I1101 00:45:56.618450   44050 sshutil.go:53] new ssh client: &{IP:192.168.83.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/pause-582989/id_rsa Username:docker}
	I1101 00:45:56.700729   44050 ssh_runner.go:195] Run: systemctl --version
	I1101 00:45:56.743523   44050 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 00:45:56.889963   44050 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 00:45:56.895974   44050 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 00:45:56.896064   44050 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 00:45:56.904255   44050 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 00:45:56.904279   44050 start.go:472] detecting cgroup driver to use...
	I1101 00:45:56.904362   44050 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 00:45:56.921160   44050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 00:45:56.934532   44050 docker.go:204] disabling cri-docker service (if available) ...
	I1101 00:45:56.934617   44050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 00:45:56.950382   44050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 00:45:56.969459   44050 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 00:45:57.131197   44050 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 00:45:57.273543   44050 docker.go:220] disabling docker service ...
	I1101 00:45:57.273629   44050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 00:45:57.288819   44050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 00:45:57.303092   44050 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 00:45:57.436436   44050 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 00:45:57.587474   44050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 00:45:57.604128   44050 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 00:45:57.627958   44050 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 00:45:57.628031   44050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:45:57.638416   44050 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 00:45:57.638501   44050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:45:57.648465   44050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:45:57.661454   44050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:45:57.817342   44050 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 00:45:57.939500   44050 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 00:45:58.002467   44050 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 00:45:58.051453   44050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:45:58.281189   44050 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 00:45:59.808061   44050 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.526830778s)
	I1101 00:45:59.808102   44050 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 00:45:59.808170   44050 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 00:45:59.817014   44050 start.go:540] Will wait 60s for crictl version
	I1101 00:45:59.817080   44050 ssh_runner.go:195] Run: which crictl
	I1101 00:45:59.821053   44050 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 00:45:59.864699   44050 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1101 00:45:59.864803   44050 ssh_runner.go:195] Run: crio --version
	I1101 00:45:59.920105   44050 ssh_runner.go:195] Run: crio --version
	I1101 00:45:59.986161   44050 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
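	If the sed edits shown at 00:45:57 above applied cleanly, the CRI-O drop-in and crictl config on the pause-582989 VM would end up containing roughly the lines sketched below. This is the expected end state inferred from the commands in the log, not a dump captured from the machine.

# Expected end state after the sed edits above (inferred, not captured from the VM):
#   /etc/crictl.yaml                      runtime-endpoint: unix:///var/run/crio/crio.sock
#   /etc/crio/crio.conf.d/02-crio.conf    pause_image    = "registry.k8s.io/pause:3.9"
#                                         cgroup_manager = "cgroupfs"
#                                         conmon_cgroup  = "pod"
# Quick way to confirm by hand after the crio restart:
grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
sudo crictl info >/dev/null && echo "crictl can reach crio"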
	I1101 00:45:57.885324   43726 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 00:45:57.910278   43726 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1101 00:45:57.981299   43726 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 00:45:57.981383   43726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:45:57.981389   43726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9 minikube.k8s.io/name=auto-090856 minikube.k8s.io/updated_at=2023_11_01T00_45_57_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:45:58.337629   43726 ops.go:34] apiserver oom_adj: -16
	I1101 00:45:58.337744   43726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:45:58.486683   43726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:45:59.092793   43726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:45:59.593134   43726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:00.092629   43726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
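	The repeated 'kubectl get sa default' runs above (the 43726 auto-090856 bootstrap) are a readiness poll: minikube keeps asking the new apiserver for the default service account, roughly every half second judging by the timestamps, before it moves on. A standalone equivalent of that loop, assuming the binary and kubeconfig paths shown in the log, would be:

# Sketch of the poll loop implied by the repeated 'get sa default' calls above
# (paths assumed from the log lines, interval approximated from the timestamps).
KUBECTL=/var/lib/minikube/binaries/v1.28.3/kubectl
KCFG=/var/lib/minikube/kubeconfig
until sudo "$KUBECTL" get sa default --kubeconfig="$KCFG" >/dev/null 2>&1; do
  sleep 0.5
done
echo "default service account present; control plane is answering"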
	I1101 00:45:59.987762   44050 main.go:141] libmachine: (pause-582989) Calling .GetIP
	I1101 00:45:59.990782   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:59.991170   44050 main.go:141] libmachine: (pause-582989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:43:81", ip: ""} in network mk-pause-582989: {Iface:virbr3 ExpiryTime:2023-11-01 01:45:02 +0000 UTC Type:0 Mac:52:54:00:5e:43:81 Iaid: IPaddr:192.168.83.166 Prefix:24 Hostname:pause-582989 Clientid:01:52:54:00:5e:43:81}
	I1101 00:45:59.991203   44050 main.go:141] libmachine: (pause-582989) DBG | domain pause-582989 has defined IP address 192.168.83.166 and MAC address 52:54:00:5e:43:81 in network mk-pause-582989
	I1101 00:45:59.991488   44050 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1101 00:45:59.996171   44050 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 00:45:59.996220   44050 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 00:46:00.043819   44050 crio.go:496] all images are preloaded for cri-o runtime.
	I1101 00:46:00.043844   44050 crio.go:415] Images already preloaded, skipping extraction
	I1101 00:46:00.043906   44050 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 00:46:00.081400   44050 crio.go:496] all images are preloaded for cri-o runtime.
	I1101 00:46:00.081424   44050 cache_images.go:84] Images are preloaded, skipping loading
	I1101 00:46:00.081490   44050 ssh_runner.go:195] Run: crio config
	I1101 00:46:00.159192   44050 cni.go:84] Creating CNI manager for ""
	I1101 00:46:00.159222   44050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 00:46:00.159243   44050 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 00:46:00.159268   44050 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.166 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-582989 NodeName:pause-582989 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.166"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.166 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 00:46:00.159429   44050 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.166
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-582989"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.166
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.166"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 00:46:00.159538   44050 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=pause-582989 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.166
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:pause-582989 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
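	The generated kubeadm config printed above is what gets copied to the VM a few lines below as /var/tmp/minikube/kubeadm.yaml.new (2099 bytes). Neither command below is part of this run; they are illustrative ways to sanity-check such a rendered config by hand, assuming the same kubeadm binary path the log uses.

# Illustrative manual checks, not performed by minikube here:
# list the images the config implies, then do a dry run of init against it.
KUBEADM=/var/lib/minikube/binaries/v1.28.3/kubeadm
sudo "$KUBEADM" config images list --config /var/tmp/minikube/kubeadm.yaml.new
sudo "$KUBEADM" init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run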
	I1101 00:46:00.159625   44050 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 00:46:00.170620   44050 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 00:46:00.170715   44050 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 00:46:00.180616   44050 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1101 00:46:00.199065   44050 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 00:46:00.216665   44050 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I1101 00:46:00.233251   44050 ssh_runner.go:195] Run: grep 192.168.83.166	control-plane.minikube.internal$ /etc/hosts
	I1101 00:46:00.237214   44050 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/pause-582989 for IP: 192.168.83.166
	I1101 00:46:00.237265   44050 certs.go:190] acquiring lock for shared ca certs: {Name:mk6185b3938c4b51e80f57b7c81926c81b632b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:46:00.237412   44050 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key
	I1101 00:46:00.237459   44050 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key
	I1101 00:46:00.237545   44050 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/pause-582989/client.key
	I1101 00:46:00.237610   44050 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/pause-582989/apiserver.key.bb7cef09
	I1101 00:46:00.237655   44050 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/pause-582989/proxy-client.key
	I1101 00:46:00.237753   44050 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem (1338 bytes)
	W1101 00:46:00.237793   44050 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504_empty.pem, impossibly tiny 0 bytes
	I1101 00:46:00.237806   44050 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 00:46:00.237830   44050 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem (1078 bytes)
	I1101 00:46:00.237854   44050 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem (1123 bytes)
	I1101 00:46:00.237875   44050 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem (1675 bytes)
	I1101 00:46:00.237914   44050 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem (1708 bytes)
	I1101 00:46:00.238471   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/pause-582989/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 00:46:00.261957   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/pause-582989/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 00:46:00.287195   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/pause-582989/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 00:46:00.310489   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/pause-582989/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 00:46:00.333993   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 00:46:00.358502   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 00:46:00.384471   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 00:46:00.408632   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 00:46:00.433073   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem --> /usr/share/ca-certificates/14504.pem (1338 bytes)
	I1101 00:46:00.456266   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /usr/share/ca-certificates/145042.pem (1708 bytes)
	I1101 00:46:00.480370   44050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 00:46:00.509229   44050 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 00:46:00.529051   44050 ssh_runner.go:195] Run: openssl version
	I1101 00:46:00.536119   44050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14504.pem && ln -fs /usr/share/ca-certificates/14504.pem /etc/ssl/certs/14504.pem"
	I1101 00:46:00.549020   44050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14504.pem
	I1101 00:46:00.554446   44050 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 00:46:00.554538   44050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14504.pem
	I1101 00:46:00.560535   44050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14504.pem /etc/ssl/certs/51391683.0"
	I1101 00:46:00.570908   44050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145042.pem && ln -fs /usr/share/ca-certificates/145042.pem /etc/ssl/certs/145042.pem"
	I1101 00:46:00.581251   44050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145042.pem
	I1101 00:46:00.585650   44050 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 00:46:00.585727   44050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145042.pem
	I1101 00:46:00.591189   44050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145042.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 00:46:00.603293   44050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 00:46:00.617172   44050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:46:00.623315   44050 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:46:00.623376   44050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:46:00.630382   44050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 00:46:00.640163   44050 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 00:46:00.644574   44050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 00:46:00.651077   44050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 00:46:00.658672   44050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 00:46:00.666213   44050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 00:46:00.673810   44050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 00:46:00.681113   44050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
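	The certificate block above does two things: it links each CA PEM under /etc/ssl/certs by its OpenSSL subject hash (hence names like 51391683.0 and b5213941.0), and it uses -checkend 86400 to confirm that each apiserver, etcd, and front-proxy certificate is still valid for at least the next 24 hours. A condensed reproduction of both steps, using paths taken from the log:

# Subject-hash symlink scheme used above: the hash output names the /etc/ssl/certs entry.
cert=/usr/share/ca-certificates/minikubeCA.pem
hash=$(openssl x509 -hash -noout -in "$cert")      # prints e.g. b5213941
sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
# -checkend 86400 exits non-zero if the certificate expires within the next 24 hours.
openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
  && echo "valid for at least 24h"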
	I1101 00:46:00.687062   44050 kubeadm.go:404] StartCluster: {Name:pause-582989 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
28.3 ClusterName:pause-582989 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.166 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gp
u-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:46:00.687198   44050 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 00:46:00.687255   44050 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 00:46:00.729114   44050 cri.go:89] found id: "e6064425f28dda71209a4dd39d96349d4c310d45fc4827fb223f3e68b9298be6"
	I1101 00:46:00.729136   44050 cri.go:89] found id: "5ef0c9df2cfa1eddf797e9cf626df84a44695ddca78bc29dbaca6cc572a2bd1f"
	I1101 00:46:00.729140   44050 cri.go:89] found id: "83ba5cf1f9ad384bdf7e669ae56276f532bedc64d0f61ac07a53d1079a0c29e3"
	I1101 00:46:00.729145   44050 cri.go:89] found id: "1201aea9235f1fdc9c9b623c75a375f4cc07d43a0bacac462dbb5ab5d01dded9"
	I1101 00:46:00.729148   44050 cri.go:89] found id: "cab178bec38f9a3c5e477d390c044d5d08cf63740f84beb5df8499a8074bad2b"
	I1101 00:46:00.729154   44050 cri.go:89] found id: "aa2851400cfbff707064071609e4b6e34a1316788b5409e084e8d29882ab2e45"
	I1101 00:46:00.729158   44050 cri.go:89] found id: ""
	I1101 00:46:00.729199   44050 ssh_runner.go:195] Run: sudo runc list -f json
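	The container IDs listed above come from 'crictl ps -a' filtered to the kube-system namespace label, with the truncated 'runc list -f json' call as the low-level cross-check. The '==> CRI-O <==' block that follows is the crio unit journal for the window between VM boot (00:44:58) and log collection (00:46:27). One way to pull roughly the same window by hand on the guest, not necessarily the exact command minikube ran, is:

# Roughly equivalent manual capture of the journal excerpt below (illustrative):
sudo journalctl -u crio --no-pager --since "2023-11-01 00:44:58" --until "2023-11-01 00:46:27"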
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-11-01 00:44:58 UTC, ends at Wed 2023-11-01 00:46:27 UTC. --
	Nov 01 00:46:26 pause-582989 crio[2357]: time="2023-11-01 00:46:26.906691735Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=fd6e949e-5194-4c3a-9169-cb82959942c1 name=/runtime.v1.RuntimeService/Version
	Nov 01 00:46:26 pause-582989 crio[2357]: time="2023-11-01 00:46:26.908630262Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=93478b39-9f9a-4b3d-8413-901f439e328b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 00:46:26 pause-582989 crio[2357]: time="2023-11-01 00:46:26.909100188Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698799586909083807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=93478b39-9f9a-4b3d-8413-901f439e328b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 00:46:26 pause-582989 crio[2357]: time="2023-11-01 00:46:26.909642514Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e444e1c3-7fc3-4b9c-affe-01c46d12a965 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:46:26 pause-582989 crio[2357]: time="2023-11-01 00:46:26.909731614Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e444e1c3-7fc3-4b9c-affe-01c46d12a965 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:46:26 pause-582989 crio[2357]: time="2023-11-01 00:46:26.910058185Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cda943c34c9fa9fa8ba74e1ba2e3586a69606f53cb45fd92e2fd4954a82e6677,PodSandboxId:8fd2822663467c1a5ed22c5a835b36c705604ab0ebd5da8121e6a198edafa582,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698799565070871120,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9kk6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dfdf36a-3ee4-4786-9d57-131962bc4c88,},Annotations:map[string]string{io.kubernetes.container.hash: c1662d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aec0b2ce981a57370f48f466371293b2248977ce9ae2fb149919152c16b9c4e,PodSandboxId:1e88b38c03400de264006beb84181038bc5e4129722c9b8ca90514ebe8d7db17,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698799563190062550,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f45gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4ae73e-212e-4a24-a6d7-25ab15186ca8,},Annotations:map[string]string{io.kubernetes.container.hash: cb76dce7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae6bfaaaa10e98d835321734c5099f843e5be624c18735b22503fe925b97bca9,PodSandboxId:26d8740b647e371de08a4bfd9b19b282d749553586104e2fac88edb86ebd66cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698799562624293861,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58defa582c316c79b3d8f3f2b1f06708,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 1ea39117,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fb41d2e45f09c62cd26743a54a7e781ea89ce7b2a8b5f5b571901aead7930ea,PodSandboxId:61e80363c9c256c8a40ed4ad60191fc287f357fc8754eeaa6479038db9bd5ca7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698799562328064964,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1ae1fffd851eec445a886d4c3ef691,},Annotations:map[string]string{
io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b6d5871e2f8d441311be04de5123e1c64372b69980840e5e3cc24e341444ac5,PodSandboxId:e395e77f128231fdcd5b8c73723f3f4bf20fe80886a3f3c3d55c96fdea355cd7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698799562090540855,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeefc617d942f43c20a82588725d37c1,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 65724505,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8c4a61c641e01e33282f1d1ce144e3abbd34a8d1849b85d43f47b4c3c2db3d,PodSandboxId:f9f2060744a5c3e8fc170304e2f244ae4ba70d4f9ad82d2aa8deffed85f3e3e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698799561737526432,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800027ab3fa7a2334199a818fc36bcd,},Annotations:map[string]str
ing{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6064425f28dda71209a4dd39d96349d4c310d45fc4827fb223f3e68b9298be6,PodSandboxId:0b85b13b5614c0350341777667605ed6b87a309555353ccb8e1e825f24a0cb59,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1698799546458580184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f45gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4ae73e-212e-4a24-a6d7-25ab15186ca8,},Annotations:map[string]string{io.kubernetes.container.hash: cb76dce7,io.kubernetes.container.p
orts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ef0c9df2cfa1eddf797e9cf626df84a44695ddca78bc29dbaca6cc572a2bd1f,PodSandboxId:9904570ba6b30d88b1dc8955c5f2bd1d2aa82a05acf0398236de5990b37999e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,State:CONTAINER_EXITED,CreatedAt:1698799545480043911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9kk6,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 2dfdf36a-3ee4-4786-9d57-131962bc4c88,},Annotations:map[string]string{io.kubernetes.container.hash: c1662d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83ba5cf1f9ad384bdf7e669ae56276f532bedc64d0f61ac07a53d1079a0c29e3,PodSandboxId:e41e7c6158c0fb0712f236f5e313b1190f72fa40ca1c4c17d54be87a2e414e2e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,State:CONTAINER_EXITED,CreatedAt:1698799523651480577,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeefc617d942f43c20a82588725d37
c1,},Annotations:map[string]string{io.kubernetes.container.hash: 65724505,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1201aea9235f1fdc9c9b623c75a375f4cc07d43a0bacac462dbb5ab5d01dded9,PodSandboxId:fe9d4e12a312ed47deb9004a29184ca768e874a96e4d2f79f5cc59b33dfa38ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,State:CONTAINER_EXITED,CreatedAt:1698799523480346762,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1ae1fffd851eec445a886d4c3ef691,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cab178bec38f9a3c5e477d390c044d5d08cf63740f84beb5df8499a8074bad2b,PodSandboxId:31d2fa936ea88dd0410c22fb1b59918af1e9f1d615bdaf51db01999c5f006a81,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1698799523427456941,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800027ab3fa7a2334199a818fc36bcd,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa2851400cfbff707064071609e4b6e34a1316788b5409e084e8d29882ab2e45,PodSandboxId:1dfcae9ac6b7021a6f1121e6e3634b5eba673ead248fe685ca39029e7c408eb2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1698799523245479301,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58defa582c316c79b3d8f3f2b1f06708,},Annotations:map[string]string{io.kubernetes.container.hash: 1ea39117,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e444e1c3-7fc3-4b9c-affe-01c46d12a965 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:46:26 pause-582989 crio[2357]: time="2023-11-01 00:46:26.962354514Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c53bdf8c-6f89-496e-b2e2-62725933ccd4 name=/runtime.v1.RuntimeService/Version
	Nov 01 00:46:26 pause-582989 crio[2357]: time="2023-11-01 00:46:26.962486237Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c53bdf8c-6f89-496e-b2e2-62725933ccd4 name=/runtime.v1.RuntimeService/Version
	Nov 01 00:46:26 pause-582989 crio[2357]: time="2023-11-01 00:46:26.963952773Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=4f12e268-1c7b-4ab7-bf7e-8151431bdd0b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 00:46:26 pause-582989 crio[2357]: time="2023-11-01 00:46:26.964364740Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698799586964351326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=4f12e268-1c7b-4ab7-bf7e-8151431bdd0b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 00:46:26 pause-582989 crio[2357]: time="2023-11-01 00:46:26.965828328Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ffff4525-91e9-430b-835b-eb67ef026ff7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:46:26 pause-582989 crio[2357]: time="2023-11-01 00:46:26.965911575Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ffff4525-91e9-430b-835b-eb67ef026ff7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:46:26 pause-582989 crio[2357]: time="2023-11-01 00:46:26.966172357Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cda943c34c9fa9fa8ba74e1ba2e3586a69606f53cb45fd92e2fd4954a82e6677,PodSandboxId:8fd2822663467c1a5ed22c5a835b36c705604ab0ebd5da8121e6a198edafa582,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698799565070871120,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9kk6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dfdf36a-3ee4-4786-9d57-131962bc4c88,},Annotations:map[string]string{io.kubernetes.container.hash: c1662d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aec0b2ce981a57370f48f466371293b2248977ce9ae2fb149919152c16b9c4e,PodSandboxId:1e88b38c03400de264006beb84181038bc5e4129722c9b8ca90514ebe8d7db17,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698799563190062550,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f45gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4ae73e-212e-4a24-a6d7-25ab15186ca8,},Annotations:map[string]string{io.kubernetes.container.hash: cb76dce7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae6bfaaaa10e98d835321734c5099f843e5be624c18735b22503fe925b97bca9,PodSandboxId:26d8740b647e371de08a4bfd9b19b282d749553586104e2fac88edb86ebd66cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698799562624293861,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58defa582c316c79b3d8f3f2b1f06708,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 1ea39117,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fb41d2e45f09c62cd26743a54a7e781ea89ce7b2a8b5f5b571901aead7930ea,PodSandboxId:61e80363c9c256c8a40ed4ad60191fc287f357fc8754eeaa6479038db9bd5ca7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698799562328064964,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1ae1fffd851eec445a886d4c3ef691,},Annotations:map[string]string{
io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b6d5871e2f8d441311be04de5123e1c64372b69980840e5e3cc24e341444ac5,PodSandboxId:e395e77f128231fdcd5b8c73723f3f4bf20fe80886a3f3c3d55c96fdea355cd7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698799562090540855,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeefc617d942f43c20a82588725d37c1,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 65724505,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8c4a61c641e01e33282f1d1ce144e3abbd34a8d1849b85d43f47b4c3c2db3d,PodSandboxId:f9f2060744a5c3e8fc170304e2f244ae4ba70d4f9ad82d2aa8deffed85f3e3e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698799561737526432,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800027ab3fa7a2334199a818fc36bcd,},Annotations:map[string]str
ing{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6064425f28dda71209a4dd39d96349d4c310d45fc4827fb223f3e68b9298be6,PodSandboxId:0b85b13b5614c0350341777667605ed6b87a309555353ccb8e1e825f24a0cb59,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1698799546458580184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f45gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4ae73e-212e-4a24-a6d7-25ab15186ca8,},Annotations:map[string]string{io.kubernetes.container.hash: cb76dce7,io.kubernetes.container.p
orts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ef0c9df2cfa1eddf797e9cf626df84a44695ddca78bc29dbaca6cc572a2bd1f,PodSandboxId:9904570ba6b30d88b1dc8955c5f2bd1d2aa82a05acf0398236de5990b37999e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,State:CONTAINER_EXITED,CreatedAt:1698799545480043911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9kk6,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 2dfdf36a-3ee4-4786-9d57-131962bc4c88,},Annotations:map[string]string{io.kubernetes.container.hash: c1662d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83ba5cf1f9ad384bdf7e669ae56276f532bedc64d0f61ac07a53d1079a0c29e3,PodSandboxId:e41e7c6158c0fb0712f236f5e313b1190f72fa40ca1c4c17d54be87a2e414e2e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,State:CONTAINER_EXITED,CreatedAt:1698799523651480577,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeefc617d942f43c20a82588725d37
c1,},Annotations:map[string]string{io.kubernetes.container.hash: 65724505,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1201aea9235f1fdc9c9b623c75a375f4cc07d43a0bacac462dbb5ab5d01dded9,PodSandboxId:fe9d4e12a312ed47deb9004a29184ca768e874a96e4d2f79f5cc59b33dfa38ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,State:CONTAINER_EXITED,CreatedAt:1698799523480346762,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1ae1fffd851eec445a886d4c3ef691,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cab178bec38f9a3c5e477d390c044d5d08cf63740f84beb5df8499a8074bad2b,PodSandboxId:31d2fa936ea88dd0410c22fb1b59918af1e9f1d615bdaf51db01999c5f006a81,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1698799523427456941,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800027ab3fa7a2334199a818fc36bcd,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa2851400cfbff707064071609e4b6e34a1316788b5409e084e8d29882ab2e45,PodSandboxId:1dfcae9ac6b7021a6f1121e6e3634b5eba673ead248fe685ca39029e7c408eb2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1698799523245479301,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58defa582c316c79b3d8f3f2b1f06708,},Annotations:map[string]string{io.kubernetes.container.hash: 1ea39117,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ffff4525-91e9-430b-835b-eb67ef026ff7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:46:27 pause-582989 crio[2357]: time="2023-11-01 00:46:27.010354488Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=05dfa6d2-3ef2-4a22-adec-3b3bea1d265b name=/runtime.v1.RuntimeService/Version
	Nov 01 00:46:27 pause-582989 crio[2357]: time="2023-11-01 00:46:27.010436513Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=05dfa6d2-3ef2-4a22-adec-3b3bea1d265b name=/runtime.v1.RuntimeService/Version
	Nov 01 00:46:27 pause-582989 crio[2357]: time="2023-11-01 00:46:27.010631564Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=3735462a-3408-4ac1-aa01-abe9e4c62304 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 01 00:46:27 pause-582989 crio[2357]: time="2023-11-01 00:46:27.011125095Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1e88b38c03400de264006beb84181038bc5e4129722c9b8ca90514ebe8d7db17,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-f45gz,Uid:4b4ae73e-212e-4a24-a6d7-25ab15186ca8,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1698799561034353513,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-f45gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4ae73e-212e-4a24-a6d7-25ab15186ca8,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-01T00:45:44.598517535Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e395e77f128231fdcd5b8c73723f3f4bf20fe80886a3f3c3d55c96fdea355cd7,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-582989,Uid:eeefc617d942f43c20a82588725d37c1,Namespace:kube-system,
Attempt:2,},State:SANDBOX_READY,CreatedAt:1698799561002278410,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeefc617d942f43c20a82588725d37c1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.83.166:8443,kubernetes.io/config.hash: eeefc617d942f43c20a82588725d37c1,kubernetes.io/config.seen: 2023-11-01T00:45:31.358899195Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8fd2822663467c1a5ed22c5a835b36c705604ab0ebd5da8121e6a198edafa582,Metadata:&PodSandboxMetadata{Name:kube-proxy-l9kk6,Uid:2dfdf36a-3ee4-4786-9d57-131962bc4c88,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1698799560957095357,Labels:map[string]string{controller-revision-hash: dffc744c9,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-l9kk6,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 2dfdf36a-3ee4-4786-9d57-131962bc4c88,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-01T00:45:44.397247363Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:61e80363c9c256c8a40ed4ad60191fc287f357fc8754eeaa6479038db9bd5ca7,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-582989,Uid:6c1ae1fffd851eec445a886d4c3ef691,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1698799560928902292,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1ae1fffd851eec445a886d4c3ef691,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6c1ae1fffd851eec445a886d4c3ef691,kubernetes.io/config.seen: 2023-11-01T00:45:31.358902294Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:26d8740b647e371de08a4bfd
9b19b282d749553586104e2fac88edb86ebd66cc,Metadata:&PodSandboxMetadata{Name:etcd-pause-582989,Uid:58defa582c316c79b3d8f3f2b1f06708,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1698799560922979000,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58defa582c316c79b3d8f3f2b1f06708,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.83.166:2379,kubernetes.io/config.hash: 58defa582c316c79b3d8f3f2b1f06708,kubernetes.io/config.seen: 2023-11-01T00:45:31.358894310Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f9f2060744a5c3e8fc170304e2f244ae4ba70d4f9ad82d2aa8deffed85f3e3e1,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-582989,Uid:2800027ab3fa7a2334199a818fc36bcd,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1698799560837731097,Labels:map[string]stri
ng{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800027ab3fa7a2334199a818fc36bcd,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2800027ab3fa7a2334199a818fc36bcd,kubernetes.io/config.seen: 2023-11-01T00:45:31.358900728Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2a5a9f64e392e71f23e2f8d636a7b8332fd7129cf1cef0414a38e5f445008dc6,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-f45gz,Uid:4b4ae73e-212e-4a24-a6d7-25ab15186ca8,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1698799557956541098,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-f45gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4ae73e-212e-4a24-a6d7-25ab15186ca8,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/
config.seen: 2023-11-01T00:45:44.598517535Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2f1605d92fdab9bab77b7e3535f0de0769df84ef8d5f1f36b7f5248b7fc20523,Metadata:&PodSandboxMetadata{Name:etcd-pause-582989,Uid:58defa582c316c79b3d8f3f2b1f06708,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1698799557886427477,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58defa582c316c79b3d8f3f2b1f06708,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.83.166:2379,kubernetes.io/config.hash: 58defa582c316c79b3d8f3f2b1f06708,kubernetes.io/config.seen: 2023-11-01T00:45:31.358894310Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:288ca3a4d5af9189582424e258f73b69ce95709af369e2c94200064225472c12,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-582989,Uid:
2800027ab3fa7a2334199a818fc36bcd,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1698799557871594993,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800027ab3fa7a2334199a818fc36bcd,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2800027ab3fa7a2334199a818fc36bcd,kubernetes.io/config.seen: 2023-11-01T00:45:31.358900728Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:06a1a64dd52b65484827de8483fedc099c22799f946c7791e841d96918ef3b35,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-582989,Uid:eeefc617d942f43c20a82588725d37c1,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1698799557824402563,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-582989,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: eeefc617d942f43c20a82588725d37c1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.83.166:8443,kubernetes.io/config.hash: eeefc617d942f43c20a82588725d37c1,kubernetes.io/config.seen: 2023-11-01T00:45:31.358899195Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7a486ebfbadaf73959d212e236f258ca733be27285f0bb18593257f306d463d7,Metadata:&PodSandboxMetadata{Name:kube-proxy-l9kk6,Uid:2dfdf36a-3ee4-4786-9d57-131962bc4c88,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1698799557748717977,Labels:map[string]string{controller-revision-hash: dffc744c9,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-l9kk6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dfdf36a-3ee4-4786-9d57-131962bc4c88,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-01T00:45:44.397247363Z,kubernetes
.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a809662ae1ca8168a9cdd42ed5b07fb545018c1f66640bd3148ec9f5c20aa8ce,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-582989,Uid:6c1ae1fffd851eec445a886d4c3ef691,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1698799557706073771,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1ae1fffd851eec445a886d4c3ef691,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6c1ae1fffd851eec445a886d4c3ef691,kubernetes.io/config.seen: 2023-11-01T00:45:31.358902294Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0b85b13b5614c0350341777667605ed6b87a309555353ccb8e1e825f24a0cb59,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-f45gz,Uid:4b4ae73e-212e-4a24-a6d7-25ab15186ca8,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1698799545
835878218,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-f45gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4ae73e-212e-4a24-a6d7-25ab15186ca8,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-01T00:45:44.598517535Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9904570ba6b30d88b1dc8955c5f2bd1d2aa82a05acf0398236de5990b37999e3,Metadata:&PodSandboxMetadata{Name:kube-proxy-l9kk6,Uid:2dfdf36a-3ee4-4786-9d57-131962bc4c88,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1698799544735984013,Labels:map[string]string{controller-revision-hash: dffc744c9,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-l9kk6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dfdf36a-3ee4-4786-9d57-131962bc4c88,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-01T0
0:45:44.397247363Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e41e7c6158c0fb0712f236f5e313b1190f72fa40ca1c4c17d54be87a2e414e2e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-582989,Uid:eeefc617d942f43c20a82588725d37c1,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1698799522748630466,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeefc617d942f43c20a82588725d37c1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.83.166:8443,kubernetes.io/config.hash: eeefc617d942f43c20a82588725d37c1,kubernetes.io/config.seen: 2023-11-01T00:45:22.131880452Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:31d2fa936ea88dd0410c22fb1b59918af1e9f1d615bdaf51db01999c5f006a81,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause
-582989,Uid:2800027ab3fa7a2334199a818fc36bcd,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1698799522740494192,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800027ab3fa7a2334199a818fc36bcd,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2800027ab3fa7a2334199a818fc36bcd,kubernetes.io/config.seen: 2023-11-01T00:45:22.131872154Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1dfcae9ac6b7021a6f1121e6e3634b5eba673ead248fe685ca39029e7c408eb2,Metadata:&PodSandboxMetadata{Name:etcd-pause-582989,Uid:58defa582c316c79b3d8f3f2b1f06708,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1698799522689392541,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-582989,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 58defa582c316c79b3d8f3f2b1f06708,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.83.166:2379,kubernetes.io/config.hash: 58defa582c316c79b3d8f3f2b1f06708,kubernetes.io/config.seen: 2023-11-01T00:45:22.131878903Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fe9d4e12a312ed47deb9004a29184ca768e874a96e4d2f79f5cc59b33dfa38ee,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-582989,Uid:6c1ae1fffd851eec445a886d4c3ef691,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1698799522682517502,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1ae1fffd851eec445a886d4c3ef691,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6c1ae1fffd851eec445a886d4c3ef691,kubernetes.io/config.seen: 2023-11-01T00:45:22.131877404
Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=3735462a-3408-4ac1-aa01-abe9e4c62304 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 01 00:46:27 pause-582989 crio[2357]: time="2023-11-01 00:46:27.012508428Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=96953a70-040e-49c4-b888-adcd87d7133d name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:46:27 pause-582989 crio[2357]: time="2023-11-01 00:46:27.012583285Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=96953a70-040e-49c4-b888-adcd87d7133d name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:46:27 pause-582989 crio[2357]: time="2023-11-01 00:46:27.012915470Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cda943c34c9fa9fa8ba74e1ba2e3586a69606f53cb45fd92e2fd4954a82e6677,PodSandboxId:8fd2822663467c1a5ed22c5a835b36c705604ab0ebd5da8121e6a198edafa582,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698799565070871120,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9kk6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dfdf36a-3ee4-4786-9d57-131962bc4c88,},Annotations:map[string]string{io.kubernetes.container.hash: c1662d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aec0b2ce981a57370f48f466371293b2248977ce9ae2fb149919152c16b9c4e,PodSandboxId:1e88b38c03400de264006beb84181038bc5e4129722c9b8ca90514ebe8d7db17,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698799563190062550,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f45gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4ae73e-212e-4a24-a6d7-25ab15186ca8,},Annotations:map[string]string{io.kubernetes.container.hash: cb76dce7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae6bfaaaa10e98d835321734c5099f843e5be624c18735b22503fe925b97bca9,PodSandboxId:26d8740b647e371de08a4bfd9b19b282d749553586104e2fac88edb86ebd66cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698799562624293861,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58defa582c316c79b3d8f3f2b1f06708,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 1ea39117,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fb41d2e45f09c62cd26743a54a7e781ea89ce7b2a8b5f5b571901aead7930ea,PodSandboxId:61e80363c9c256c8a40ed4ad60191fc287f357fc8754eeaa6479038db9bd5ca7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698799562328064964,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1ae1fffd851eec445a886d4c3ef691,},Annotations:map[string]string{
io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b6d5871e2f8d441311be04de5123e1c64372b69980840e5e3cc24e341444ac5,PodSandboxId:e395e77f128231fdcd5b8c73723f3f4bf20fe80886a3f3c3d55c96fdea355cd7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698799562090540855,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeefc617d942f43c20a82588725d37c1,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 65724505,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8c4a61c641e01e33282f1d1ce144e3abbd34a8d1849b85d43f47b4c3c2db3d,PodSandboxId:f9f2060744a5c3e8fc170304e2f244ae4ba70d4f9ad82d2aa8deffed85f3e3e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698799561737526432,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800027ab3fa7a2334199a818fc36bcd,},Annotations:map[string]str
ing{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6064425f28dda71209a4dd39d96349d4c310d45fc4827fb223f3e68b9298be6,PodSandboxId:0b85b13b5614c0350341777667605ed6b87a309555353ccb8e1e825f24a0cb59,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1698799546458580184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f45gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4ae73e-212e-4a24-a6d7-25ab15186ca8,},Annotations:map[string]string{io.kubernetes.container.hash: cb76dce7,io.kubernetes.container.p
orts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ef0c9df2cfa1eddf797e9cf626df84a44695ddca78bc29dbaca6cc572a2bd1f,PodSandboxId:9904570ba6b30d88b1dc8955c5f2bd1d2aa82a05acf0398236de5990b37999e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,State:CONTAINER_EXITED,CreatedAt:1698799545480043911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9kk6,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 2dfdf36a-3ee4-4786-9d57-131962bc4c88,},Annotations:map[string]string{io.kubernetes.container.hash: c1662d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83ba5cf1f9ad384bdf7e669ae56276f532bedc64d0f61ac07a53d1079a0c29e3,PodSandboxId:e41e7c6158c0fb0712f236f5e313b1190f72fa40ca1c4c17d54be87a2e414e2e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,State:CONTAINER_EXITED,CreatedAt:1698799523651480577,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeefc617d942f43c20a82588725d37
c1,},Annotations:map[string]string{io.kubernetes.container.hash: 65724505,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1201aea9235f1fdc9c9b623c75a375f4cc07d43a0bacac462dbb5ab5d01dded9,PodSandboxId:fe9d4e12a312ed47deb9004a29184ca768e874a96e4d2f79f5cc59b33dfa38ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,State:CONTAINER_EXITED,CreatedAt:1698799523480346762,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1ae1fffd851eec445a886d4c3ef691,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cab178bec38f9a3c5e477d390c044d5d08cf63740f84beb5df8499a8074bad2b,PodSandboxId:31d2fa936ea88dd0410c22fb1b59918af1e9f1d615bdaf51db01999c5f006a81,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1698799523427456941,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800027ab3fa7a2334199a818fc36bcd,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa2851400cfbff707064071609e4b6e34a1316788b5409e084e8d29882ab2e45,PodSandboxId:1dfcae9ac6b7021a6f1121e6e3634b5eba673ead248fe685ca39029e7c408eb2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1698799523245479301,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58defa582c316c79b3d8f3f2b1f06708,},Annotations:map[string]string{io.kubernetes.container.hash: 1ea39117,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=96953a70-040e-49c4-b888-adcd87d7133d name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:46:27 pause-582989 crio[2357]: time="2023-11-01 00:46:27.014288144Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=bd512e4d-82bd-4972-8c93-8c5d2eab081d name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 00:46:27 pause-582989 crio[2357]: time="2023-11-01 00:46:27.014753707Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698799587014645719,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=bd512e4d-82bd-4972-8c93-8c5d2eab081d name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 00:46:27 pause-582989 crio[2357]: time="2023-11-01 00:46:27.015599280Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=510248fc-a116-492c-b7c3-e5b610aa2ca0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:46:27 pause-582989 crio[2357]: time="2023-11-01 00:46:27.015674566Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=510248fc-a116-492c-b7c3-e5b610aa2ca0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 00:46:27 pause-582989 crio[2357]: time="2023-11-01 00:46:27.016078095Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cda943c34c9fa9fa8ba74e1ba2e3586a69606f53cb45fd92e2fd4954a82e6677,PodSandboxId:8fd2822663467c1a5ed22c5a835b36c705604ab0ebd5da8121e6a198edafa582,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698799565070871120,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9kk6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dfdf36a-3ee4-4786-9d57-131962bc4c88,},Annotations:map[string]string{io.kubernetes.container.hash: c1662d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aec0b2ce981a57370f48f466371293b2248977ce9ae2fb149919152c16b9c4e,PodSandboxId:1e88b38c03400de264006beb84181038bc5e4129722c9b8ca90514ebe8d7db17,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698799563190062550,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f45gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4ae73e-212e-4a24-a6d7-25ab15186ca8,},Annotations:map[string]string{io.kubernetes.container.hash: cb76dce7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae6bfaaaa10e98d835321734c5099f843e5be624c18735b22503fe925b97bca9,PodSandboxId:26d8740b647e371de08a4bfd9b19b282d749553586104e2fac88edb86ebd66cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698799562624293861,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58defa582c316c79b3d8f3f2b1f06708,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 1ea39117,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fb41d2e45f09c62cd26743a54a7e781ea89ce7b2a8b5f5b571901aead7930ea,PodSandboxId:61e80363c9c256c8a40ed4ad60191fc287f357fc8754eeaa6479038db9bd5ca7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698799562328064964,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1ae1fffd851eec445a886d4c3ef691,},Annotations:map[string]string{
io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b6d5871e2f8d441311be04de5123e1c64372b69980840e5e3cc24e341444ac5,PodSandboxId:e395e77f128231fdcd5b8c73723f3f4bf20fe80886a3f3c3d55c96fdea355cd7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698799562090540855,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeefc617d942f43c20a82588725d37c1,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 65724505,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8c4a61c641e01e33282f1d1ce144e3abbd34a8d1849b85d43f47b4c3c2db3d,PodSandboxId:f9f2060744a5c3e8fc170304e2f244ae4ba70d4f9ad82d2aa8deffed85f3e3e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698799561737526432,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800027ab3fa7a2334199a818fc36bcd,},Annotations:map[string]str
ing{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6064425f28dda71209a4dd39d96349d4c310d45fc4827fb223f3e68b9298be6,PodSandboxId:0b85b13b5614c0350341777667605ed6b87a309555353ccb8e1e825f24a0cb59,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1698799546458580184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f45gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4ae73e-212e-4a24-a6d7-25ab15186ca8,},Annotations:map[string]string{io.kubernetes.container.hash: cb76dce7,io.kubernetes.container.p
orts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ef0c9df2cfa1eddf797e9cf626df84a44695ddca78bc29dbaca6cc572a2bd1f,PodSandboxId:9904570ba6b30d88b1dc8955c5f2bd1d2aa82a05acf0398236de5990b37999e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,State:CONTAINER_EXITED,CreatedAt:1698799545480043911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9kk6,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 2dfdf36a-3ee4-4786-9d57-131962bc4c88,},Annotations:map[string]string{io.kubernetes.container.hash: c1662d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83ba5cf1f9ad384bdf7e669ae56276f532bedc64d0f61ac07a53d1079a0c29e3,PodSandboxId:e41e7c6158c0fb0712f236f5e313b1190f72fa40ca1c4c17d54be87a2e414e2e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,State:CONTAINER_EXITED,CreatedAt:1698799523651480577,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeefc617d942f43c20a82588725d37
c1,},Annotations:map[string]string{io.kubernetes.container.hash: 65724505,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1201aea9235f1fdc9c9b623c75a375f4cc07d43a0bacac462dbb5ab5d01dded9,PodSandboxId:fe9d4e12a312ed47deb9004a29184ca768e874a96e4d2f79f5cc59b33dfa38ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,State:CONTAINER_EXITED,CreatedAt:1698799523480346762,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1ae1fffd851eec445a886d4c3ef691,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cab178bec38f9a3c5e477d390c044d5d08cf63740f84beb5df8499a8074bad2b,PodSandboxId:31d2fa936ea88dd0410c22fb1b59918af1e9f1d615bdaf51db01999c5f006a81,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1698799523427456941,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2800027ab3fa7a2334199a818fc36bcd,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa2851400cfbff707064071609e4b6e34a1316788b5409e084e8d29882ab2e45,PodSandboxId:1dfcae9ac6b7021a6f1121e6e3634b5eba673ead248fe685ca39029e7c408eb2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1698799523245479301,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-582989,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58defa582c316c79b3d8f3f2b1f06708,},Annotations:map[string]string{io.kubernetes.container.hash: 1ea39117,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=510248fc-a116-492c-b7c3-e5b610aa2ca0 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	cda943c34c9fa       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   22 seconds ago       Running             kube-proxy                1                   8fd2822663467       kube-proxy-l9kk6
	4aec0b2ce981a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   23 seconds ago       Running             coredns                   1                   1e88b38c03400       coredns-5dd5756b68-f45gz
	ae6bfaaaa10e9       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   24 seconds ago       Running             etcd                      1                   26d8740b647e3       etcd-pause-582989
	6fb41d2e45f09       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   24 seconds ago       Running             kube-scheduler            1                   61e80363c9c25       kube-scheduler-pause-582989
	3b6d5871e2f8d       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   25 seconds ago       Running             kube-apiserver            1                   e395e77f12823       kube-apiserver-pause-582989
	5e8c4a61c641e       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   25 seconds ago       Running             kube-controller-manager   1                   f9f2060744a5c       kube-controller-manager-pause-582989
	e6064425f28dd       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   40 seconds ago       Exited              coredns                   0                   0b85b13b5614c       coredns-5dd5756b68-f45gz
	5ef0c9df2cfa1       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   41 seconds ago       Exited              kube-proxy                0                   9904570ba6b30       kube-proxy-l9kk6
	83ba5cf1f9ad3       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   About a minute ago   Exited              kube-apiserver            0                   e41e7c6158c0f       kube-apiserver-pause-582989
	1201aea9235f1       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   About a minute ago   Exited              kube-scheduler            0                   fe9d4e12a312e       kube-scheduler-pause-582989
	cab178bec38f9       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   About a minute ago   Exited              kube-controller-manager   0                   31d2fa936ea88       kube-controller-manager-pause-582989
	aa2851400cfbf       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   About a minute ago   Exited              etcd                      0                   1dfcae9ac6b70       etcd-pause-582989
	
	* 
	* ==> coredns [4aec0b2ce981a57370f48f466371293b2248977ce9ae2fb149919152c16b9c4e] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 347fb4f25cc546215231b2e9ef34a7838489408c50ad1d77e38b06de967dd388dc540a0db2692259640c7998323f3763426b7a7e73fad2aa89cebddf27cf7c94
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:56447 - 13752 "HINFO IN 826386391169473441.4398837603073810538. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.00915945s
	
	* 
	* ==> coredns [e6064425f28dda71209a4dd39d96349d4c310d45fc4827fb223f3e68b9298be6] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 347fb4f25cc546215231b2e9ef34a7838489408c50ad1d77e38b06de967dd388dc540a0db2692259640c7998323f3763426b7a7e73fad2aa89cebddf27cf7c94
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60087 - 48228 "HINFO IN 6161423461098319785.3218909952744732747. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017768473s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-582989
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-582989
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9
	                    minikube.k8s.io/name=pause-582989
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_01T00_45_31_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Nov 2023 00:45:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-582989
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Nov 2023 00:46:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Nov 2023 00:46:12 +0000   Wed, 01 Nov 2023 00:45:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Nov 2023 00:46:12 +0000   Wed, 01 Nov 2023 00:45:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Nov 2023 00:46:12 +0000   Wed, 01 Nov 2023 00:45:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Nov 2023 00:46:12 +0000   Wed, 01 Nov 2023 00:45:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.166
	  Hostname:    pause-582989
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 6bc7a7e21aa14f7798ac5a787d112281
	  System UUID:                6bc7a7e2-1aa1-4f77-98ac-5a787d112281
	  Boot ID:                    a2e2fd9a-cba3-46f5-b2db-5d98a9cb887a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-f45gz                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     43s
	  kube-system                 etcd-pause-582989                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         56s
	  kube-system                 kube-apiserver-pause-582989             250m (12%)    0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-controller-manager-pause-582989    200m (10%)    0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-proxy-l9kk6                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-scheduler-pause-582989             100m (5%)     0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 41s                kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  NodeAllocatableEnforced  65s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  65s (x8 over 65s)  kubelet          Node pause-582989 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    65s (x8 over 65s)  kubelet          Node pause-582989 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     65s (x7 over 65s)  kubelet          Node pause-582989 status is now: NodeHasSufficientPID
	  Normal  Starting                 65s                kubelet          Starting kubelet.
	  Normal  Starting                 56s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s                kubelet          Node pause-582989 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s                kubelet          Node pause-582989 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s                kubelet          Node pause-582989 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  56s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                56s                kubelet          Node pause-582989 status is now: NodeReady
	  Normal  RegisteredNode           45s                node-controller  Node pause-582989 event: Registered Node pause-582989 in Controller
	  Normal  RegisteredNode           7s                 node-controller  Node pause-582989 event: Registered Node pause-582989 in Controller
	
	* 
	* ==> dmesg <==
	* [Nov 1 00:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071749] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.700999] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.216540] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.158791] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Nov 1 00:45] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000004] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.107093] systemd-fstab-generator[644]: Ignoring "noauto" for root device
	[  +0.117025] systemd-fstab-generator[655]: Ignoring "noauto" for root device
	[  +0.151792] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.119944] systemd-fstab-generator[679]: Ignoring "noauto" for root device
	[  +0.273706] systemd-fstab-generator[703]: Ignoring "noauto" for root device
	[  +9.774989] systemd-fstab-generator[927]: Ignoring "noauto" for root device
	[  +9.808715] systemd-fstab-generator[1260]: Ignoring "noauto" for root device
	[ +25.952781] systemd-fstab-generator[2050]: Ignoring "noauto" for root device
	[  +0.156903] systemd-fstab-generator[2061]: Ignoring "noauto" for root device
	[  +0.173148] systemd-fstab-generator[2074]: Ignoring "noauto" for root device
	[  +0.128713] systemd-fstab-generator[2085]: Ignoring "noauto" for root device
	[  +0.303134] kauditd_printk_skb: 23 callbacks suppressed
	[  +0.344554] systemd-fstab-generator[2245]: Ignoring "noauto" for root device
	[Nov 1 00:46] kauditd_printk_skb: 8 callbacks suppressed
	
	* 
	* ==> etcd [aa2851400cfbff707064071609e4b6e34a1316788b5409e084e8d29882ab2e45] <==
	* {"level":"warn","ts":"2023-11-01T00:45:44.331207Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-01T00:45:43.928649Z","time spent":"402.533179ms","remote":"127.0.0.1:48898","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":164,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/kube-public/default\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-public/default\" value_size:112 >> failure:<>"}
	{"level":"info","ts":"2023-11-01T00:45:44.331368Z","caller":"traceutil/trace.go:171","msg":"trace[1765063127] transaction","detail":"{read_only:false; response_revision:347; number_of_response:1; }","duration":"402.604837ms","start":"2023-11-01T00:45:43.928753Z","end":"2023-11-01T00:45:44.331357Z","steps":["trace[1765063127] 'process raft request'  (duration: 401.507387ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-01T00:45:44.331405Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-01T00:45:43.928748Z","time spent":"402.638395ms","remote":"127.0.0.1:48962","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3620,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-5dd5756b68\" mod_revision:0 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-5dd5756b68\" value_size:3560 >> failure:<>"}
	{"level":"info","ts":"2023-11-01T00:45:44.331548Z","caller":"traceutil/trace.go:171","msg":"trace[1379998469] transaction","detail":"{read_only:false; response_revision:348; number_of_response:1; }","duration":"400.164616ms","start":"2023-11-01T00:45:43.931373Z","end":"2023-11-01T00:45:44.331538Z","steps":["trace[1379998469] 'process raft request'  (duration: 398.919195ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-01T00:45:44.331593Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-01T00:45:43.931294Z","time spent":"400.274907ms","remote":"127.0.0.1:48964","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2124,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/controllerrevisions/kube-system/kube-proxy-dffc744c9\" mod_revision:0 > success:<request_put:<key:\"/registry/controllerrevisions/kube-system/kube-proxy-dffc744c9\" value_size:2054 >> failure:<>"}
	{"level":"warn","ts":"2023-11-01T00:45:44.33173Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.965344ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:612"}
	{"level":"info","ts":"2023-11-01T00:45:44.331758Z","caller":"traceutil/trace.go:171","msg":"trace[57907702] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:348; }","duration":"189.996086ms","start":"2023-11-01T00:45:44.141753Z","end":"2023-11-01T00:45:44.33175Z","steps":["trace[57907702] 'agreement among raft nodes before linearized reading'  (duration: 189.925955ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-01T00:45:44.331964Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"289.26651ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:3193"}
	{"level":"info","ts":"2023-11-01T00:45:44.332019Z","caller":"traceutil/trace.go:171","msg":"trace[753113455] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:348; }","duration":"289.323431ms","start":"2023-11-01T00:45:44.042687Z","end":"2023-11-01T00:45:44.33201Z","steps":["trace[753113455] 'agreement among raft nodes before linearized reading'  (duration: 289.236943ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-01T00:45:44.33458Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"405.711096ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-01T00:45:44.334652Z","caller":"traceutil/trace.go:171","msg":"trace[279390574] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:348; }","duration":"405.788775ms","start":"2023-11-01T00:45:43.928851Z","end":"2023-11-01T00:45:44.334639Z","steps":["trace[279390574] 'agreement among raft nodes before linearized reading'  (duration: 405.675299ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-01T00:45:44.334696Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-01T00:45:43.928775Z","time spent":"405.902438ms","remote":"127.0.0.1:48852","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2023-11-01T00:45:44.329718Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"413.243757ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/edit\" ","response":"range_response_count:1 size:2205"}
	{"level":"info","ts":"2023-11-01T00:45:44.335119Z","caller":"traceutil/trace.go:171","msg":"trace[1534831500] range","detail":"{range_begin:/registry/clusterroles/edit; range_end:; response_count:1; response_revision:344; }","duration":"418.646734ms","start":"2023-11-01T00:45:43.916457Z","end":"2023-11-01T00:45:44.335104Z","steps":["trace[1534831500] 'agreement among raft nodes before linearized reading'  (duration: 413.12707ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-01T00:45:44.335159Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-01T00:45:43.916449Z","time spent":"418.697001ms","remote":"127.0.0.1:48930","response type":"/etcdserverpb.KV/Range","request count":0,"request size":29,"response count":1,"response size":2228,"request content":"key:\"/registry/clusterroles/edit\" "}
	{"level":"info","ts":"2023-11-01T00:45:50.83517Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-11-01T00:45:50.835262Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-582989","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.166:2380"],"advertise-client-urls":["https://192.168.83.166:2379"]}
	{"level":"warn","ts":"2023-11-01T00:45:50.835435Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-11-01T00:45:50.835586Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-11-01T00:45:50.887912Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.83.166:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-11-01T00:45:50.888011Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.83.166:2379: use of closed network connection"}
	{"level":"info","ts":"2023-11-01T00:45:50.888113Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"14b81da0c68bfbd7","current-leader-member-id":"14b81da0c68bfbd7"}
	{"level":"info","ts":"2023-11-01T00:45:50.894405Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.83.166:2380"}
	{"level":"info","ts":"2023-11-01T00:45:50.894567Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.83.166:2380"}
	{"level":"info","ts":"2023-11-01T00:45:50.894612Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-582989","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.166:2380"],"advertise-client-urls":["https://192.168.83.166:2379"]}
	
	* 
	* ==> etcd [ae6bfaaaa10e98d835321734c5099f843e5be624c18735b22503fe925b97bca9] <==
	* {"level":"info","ts":"2023-11-01T00:46:04.739248Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-01T00:46:04.739276Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-01T00:46:04.739557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14b81da0c68bfbd7 switched to configuration voters=(1492975852836355031)"}
	{"level":"info","ts":"2023-11-01T00:46:04.739656Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a30fd0400b31c5f5","local-member-id":"14b81da0c68bfbd7","added-peer-id":"14b81da0c68bfbd7","added-peer-peer-urls":["https://192.168.83.166:2380"]}
	{"level":"info","ts":"2023-11-01T00:46:04.739865Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a30fd0400b31c5f5","local-member-id":"14b81da0c68bfbd7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T00:46:04.739924Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T00:46:04.741671Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-01T00:46:04.741993Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"14b81da0c68bfbd7","initial-advertise-peer-urls":["https://192.168.83.166:2380"],"listen-peer-urls":["https://192.168.83.166:2380"],"advertise-client-urls":["https://192.168.83.166:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.83.166:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-01T00:46:04.743191Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.83.166:2380"}
	{"level":"info","ts":"2023-11-01T00:46:04.743615Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.83.166:2380"}
	{"level":"info","ts":"2023-11-01T00:46:04.743545Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-01T00:46:06.404061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14b81da0c68bfbd7 is starting a new election at term 2"}
	{"level":"info","ts":"2023-11-01T00:46:06.404204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14b81da0c68bfbd7 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-11-01T00:46:06.404285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14b81da0c68bfbd7 received MsgPreVoteResp from 14b81da0c68bfbd7 at term 2"}
	{"level":"info","ts":"2023-11-01T00:46:06.404367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14b81da0c68bfbd7 became candidate at term 3"}
	{"level":"info","ts":"2023-11-01T00:46:06.404459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14b81da0c68bfbd7 received MsgVoteResp from 14b81da0c68bfbd7 at term 3"}
	{"level":"info","ts":"2023-11-01T00:46:06.40449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14b81da0c68bfbd7 became leader at term 3"}
	{"level":"info","ts":"2023-11-01T00:46:06.404578Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 14b81da0c68bfbd7 elected leader 14b81da0c68bfbd7 at term 3"}
	{"level":"info","ts":"2023-11-01T00:46:06.407862Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"14b81da0c68bfbd7","local-member-attributes":"{Name:pause-582989 ClientURLs:[https://192.168.83.166:2379]}","request-path":"/0/members/14b81da0c68bfbd7/attributes","cluster-id":"a30fd0400b31c5f5","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-01T00:46:06.407922Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-01T00:46:06.408347Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-01T00:46:06.408574Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-01T00:46:06.40871Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-01T00:46:06.408975Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-01T00:46:06.410294Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.166:2379"}
	
	* 
	* ==> kernel <==
	*  00:46:27 up 1 min,  0 users,  load average: 1.26, 0.47, 0.17
	Linux pause-582989 5.10.57 #1 SMP Tue Oct 31 22:14:31 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [3b6d5871e2f8d441311be04de5123e1c64372b69980840e5e3cc24e341444ac5] <==
	* I1101 00:46:07.858570       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1101 00:46:07.858602       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1101 00:46:07.858751       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I1101 00:46:07.858838       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I1101 00:46:07.858955       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I1101 00:46:07.858449       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I1101 00:46:07.899493       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1101 00:46:07.899643       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1101 00:46:07.981165       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 00:46:08.024511       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1101 00:46:08.055846       1 shared_informer.go:318] Caches are synced for configmaps
	I1101 00:46:08.057566       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1101 00:46:08.058137       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 00:46:08.058423       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1101 00:46:08.058466       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1101 00:46:08.059161       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1101 00:46:08.059311       1 aggregator.go:166] initial CRD sync complete...
	I1101 00:46:08.059340       1 autoregister_controller.go:141] Starting autoregister controller
	I1101 00:46:08.059362       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 00:46:08.059384       1 cache.go:39] Caches are synced for autoregister controller
	I1101 00:46:08.059520       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E1101 00:46:08.110520       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 00:46:08.863558       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 00:46:20.380506       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 00:46:20.442131       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-apiserver [83ba5cf1f9ad384bdf7e669ae56276f532bedc64d0f61ac07a53d1079a0c29e3] <==
	* }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 00:45:50.863976       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 00:45:50.864088       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 00:45:50.868339       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [5e8c4a61c641e01e33282f1d1ce144e3abbd34a8d1849b85d43f47b4c3c2db3d] <==
	* I1101 00:46:20.406288       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1101 00:46:20.406471       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-582989"
	I1101 00:46:20.406601       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1101 00:46:20.406667       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1101 00:46:20.406734       1 shared_informer.go:318] Caches are synced for namespace
	I1101 00:46:20.407429       1 taint_manager.go:211] "Sending events to api server"
	I1101 00:46:20.407638       1 event.go:307] "Event occurred" object="pause-582989" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-582989 event: Registered Node pause-582989 in Controller"
	I1101 00:46:20.410004       1 shared_informer.go:318] Caches are synced for deployment
	I1101 00:46:20.412483       1 shared_informer.go:318] Caches are synced for cronjob
	I1101 00:46:20.415689       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I1101 00:46:20.416928       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I1101 00:46:20.417282       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I1101 00:46:20.418720       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1101 00:46:20.420997       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1101 00:46:20.424290       1 shared_informer.go:318] Caches are synced for attach detach
	I1101 00:46:20.428890       1 shared_informer.go:318] Caches are synced for job
	I1101 00:46:20.428983       1 shared_informer.go:318] Caches are synced for endpoint
	I1101 00:46:20.434302       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1101 00:46:20.494467       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 00:46:20.510230       1 shared_informer.go:318] Caches are synced for stateful set
	I1101 00:46:20.538271       1 shared_informer.go:318] Caches are synced for disruption
	I1101 00:46:20.554026       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 00:46:20.902193       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 00:46:20.902310       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1101 00:46:20.956723       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-controller-manager [cab178bec38f9a3c5e477d390c044d5d08cf63740f84beb5df8499a8074bad2b] <==
	* I1101 00:45:42.916239       1 shared_informer.go:318] Caches are synced for crt configmap
	I1101 00:45:42.916624       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1101 00:45:42.982943       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 00:45:42.989887       1 shared_informer.go:318] Caches are synced for disruption
	I1101 00:45:43.019192       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1101 00:45:43.057155       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 00:45:43.443671       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 00:45:43.448298       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 00:45:43.448428       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1101 00:45:44.345668       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1101 00:45:44.375191       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-l9kk6"
	I1101 00:45:44.479035       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-lp248"
	I1101 00:45:44.527872       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-f45gz"
	I1101 00:45:44.556002       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1101 00:45:44.660993       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="325.642748ms"
	I1101 00:45:44.720442       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-lp248"
	I1101 00:45:44.773756       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="112.000484ms"
	I1101 00:45:44.833007       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.096925ms"
	I1101 00:45:44.836265       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="104.147µs"
	I1101 00:45:46.666339       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="112.438µs"
	I1101 00:45:46.717271       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="122.886µs"
	I1101 00:45:46.732329       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="60.96µs"
	I1101 00:45:46.735485       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="64.679µs"
	I1101 00:45:47.668666       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.900216ms"
	I1101 00:45:47.670661       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="97.021µs"
	
	* 
	* ==> kube-proxy [5ef0c9df2cfa1eddf797e9cf626df84a44695ddca78bc29dbaca6cc572a2bd1f] <==
	* I1101 00:45:45.734212       1 server_others.go:69] "Using iptables proxy"
	I1101 00:45:45.749281       1 node.go:141] Successfully retrieved node IP: 192.168.83.166
	I1101 00:45:45.797438       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1101 00:45:45.797512       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 00:45:45.800340       1 server_others.go:152] "Using iptables Proxier"
	I1101 00:45:45.800757       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 00:45:45.801191       1 server.go:846] "Version info" version="v1.28.3"
	I1101 00:45:45.801294       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 00:45:45.803318       1 config.go:188] "Starting service config controller"
	I1101 00:45:45.803564       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 00:45:45.803629       1 config.go:97] "Starting endpoint slice config controller"
	I1101 00:45:45.803652       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 00:45:45.804460       1 config.go:315] "Starting node config controller"
	I1101 00:45:45.804507       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 00:45:45.908966       1 shared_informer.go:318] Caches are synced for service config
	I1101 00:45:45.909319       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1101 00:45:45.909686       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [cda943c34c9fa9fa8ba74e1ba2e3586a69606f53cb45fd92e2fd4954a82e6677] <==
	* I1101 00:46:05.259085       1 server_others.go:69] "Using iptables proxy"
	I1101 00:46:08.042506       1 node.go:141] Successfully retrieved node IP: 192.168.83.166
	I1101 00:46:08.197414       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1101 00:46:08.197504       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 00:46:08.206766       1 server_others.go:152] "Using iptables Proxier"
	I1101 00:46:08.207004       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 00:46:08.207351       1 server.go:846] "Version info" version="v1.28.3"
	I1101 00:46:08.207447       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 00:46:08.209241       1 config.go:188] "Starting service config controller"
	I1101 00:46:08.209301       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 00:46:08.209349       1 config.go:97] "Starting endpoint slice config controller"
	I1101 00:46:08.209355       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 00:46:08.219405       1 config.go:315] "Starting node config controller"
	I1101 00:46:08.219497       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 00:46:08.309764       1 shared_informer.go:318] Caches are synced for service config
	I1101 00:46:08.310134       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1101 00:46:08.320447       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [1201aea9235f1fdc9c9b623c75a375f4cc07d43a0bacac462dbb5ab5d01dded9] <==
	* E1101 00:45:28.691171       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 00:45:28.735499       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1101 00:45:28.735535       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1101 00:45:28.868116       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1101 00:45:28.868265       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1101 00:45:28.901106       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1101 00:45:28.901175       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1101 00:45:28.905232       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1101 00:45:28.905291       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1101 00:45:28.912909       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1101 00:45:28.913039       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1101 00:45:28.963391       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1101 00:45:28.963437       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1101 00:45:28.997674       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1101 00:45:28.997750       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1101 00:45:29.022720       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1101 00:45:29.022775       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1101 00:45:29.055869       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1101 00:45:29.055919       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1101 00:45:29.087916       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1101 00:45:29.087969       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1101 00:45:30.928654       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 00:45:50.845948       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1101 00:45:50.846092       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E1101 00:45:50.855458       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [6fb41d2e45f09c62cd26743a54a7e781ea89ce7b2a8b5f5b571901aead7930ea] <==
	* I1101 00:46:05.327163       1 serving.go:348] Generated self-signed cert in-memory
	W1101 00:46:07.929696       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 00:46:07.930258       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 00:46:07.930477       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 00:46:07.930509       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 00:46:07.994079       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1101 00:46:07.994220       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 00:46:08.009376       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1101 00:46:08.012933       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1101 00:46:08.013052       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 00:46:08.013088       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 00:46:08.114225       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-11-01 00:44:58 UTC, ends at Wed 2023-11-01 00:46:27 UTC. --
	Nov 01 00:45:59 pause-582989 kubelet[1267]: E1101 00:45:59.544351    1267 kubelet.go:2473] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 01 00:45:59 pause-582989 kubelet[1267]: E1101 00:45:59.759184    1267 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="nil"
	Nov 01 00:45:59 pause-582989 kubelet[1267]: E1101 00:45:59.759253    1267 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 01 00:45:59 pause-582989 kubelet[1267]: E1101 00:45:59.759269    1267 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 01 00:46:00 pause-582989 kubelet[1267]: I1101 00:46:00.766969    1267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="288ca3a4d5af9189582424e258f73b69ce95709af369e2c94200064225472c12"
	Nov 01 00:46:00 pause-582989 kubelet[1267]: I1101 00:46:00.772106    1267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f1605d92fdab9bab77b7e3535f0de0769df84ef8d5f1f36b7f5248b7fc20523"
	Nov 01 00:46:00 pause-582989 kubelet[1267]: I1101 00:46:00.789050    1267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a809662ae1ca8168a9cdd42ed5b07fb545018c1f66640bd3148ec9f5c20aa8ce"
	Nov 01 00:46:00 pause-582989 kubelet[1267]: I1101 00:46:00.807344    1267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a486ebfbadaf73959d212e236f258ca733be27285f0bb18593257f306d463d7"
	Nov 01 00:46:00 pause-582989 kubelet[1267]: I1101 00:46:00.814345    1267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a5a9f64e392e71f23e2f8d636a7b8332fd7129cf1cef0414a38e5f445008dc6"
	Nov 01 00:46:00 pause-582989 kubelet[1267]: I1101 00:46:00.844524    1267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06a1a64dd52b65484827de8483fedc099c22799f946c7791e841d96918ef3b35"
	Nov 01 00:46:01 pause-582989 kubelet[1267]: I1101 00:46:01.542389    1267 status_manager.go:853] "Failed to get status for pod" podUID="58defa582c316c79b3d8f3f2b1f06708" pod="kube-system/etcd-pause-582989" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-582989\": dial tcp 192.168.83.166:8443: connect: connection refused"
	Nov 01 00:46:01 pause-582989 kubelet[1267]: I1101 00:46:01.543836    1267 status_manager.go:853] "Failed to get status for pod" podUID="2dfdf36a-3ee4-4786-9d57-131962bc4c88" pod="kube-system/kube-proxy-l9kk6" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l9kk6\": dial tcp 192.168.83.166:8443: connect: connection refused"
	Nov 01 00:46:01 pause-582989 kubelet[1267]: I1101 00:46:01.545206    1267 status_manager.go:853] "Failed to get status for pod" podUID="4b4ae73e-212e-4a24-a6d7-25ab15186ca8" pod="kube-system/coredns-5dd5756b68-f45gz" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-f45gz\": dial tcp 192.168.83.166:8443: connect: connection refused"
	Nov 01 00:46:01 pause-582989 kubelet[1267]: I1101 00:46:01.545955    1267 status_manager.go:853] "Failed to get status for pod" podUID="6c1ae1fffd851eec445a886d4c3ef691" pod="kube-system/kube-scheduler-pause-582989" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-582989\": dial tcp 192.168.83.166:8443: connect: connection refused"
	Nov 01 00:46:01 pause-582989 kubelet[1267]: I1101 00:46:01.546736    1267 status_manager.go:853] "Failed to get status for pod" podUID="2800027ab3fa7a2334199a818fc36bcd" pod="kube-system/kube-controller-manager-pause-582989" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-582989\": dial tcp 192.168.83.166:8443: connect: connection refused"
	Nov 01 00:46:01 pause-582989 kubelet[1267]: I1101 00:46:01.547350    1267 status_manager.go:853] "Failed to get status for pod" podUID="eeefc617d942f43c20a82588725d37c1" pod="kube-system/kube-apiserver-pause-582989" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-582989\": dial tcp 192.168.83.166:8443: connect: connection refused"
	Nov 01 00:46:02 pause-582989 kubelet[1267]: E1101 00:46:02.500010    1267 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-582989\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-582989?resourceVersion=0&timeout=10s\": dial tcp 192.168.83.166:8443: connect: connection refused"
	Nov 01 00:46:02 pause-582989 kubelet[1267]: E1101 00:46:02.500278    1267 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-582989\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-582989?timeout=10s\": dial tcp 192.168.83.166:8443: connect: connection refused"
	Nov 01 00:46:02 pause-582989 kubelet[1267]: E1101 00:46:02.500442    1267 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-582989\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-582989?timeout=10s\": dial tcp 192.168.83.166:8443: connect: connection refused"
	Nov 01 00:46:02 pause-582989 kubelet[1267]: E1101 00:46:02.500643    1267 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-582989\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-582989?timeout=10s\": dial tcp 192.168.83.166:8443: connect: connection refused"
	Nov 01 00:46:02 pause-582989 kubelet[1267]: E1101 00:46:02.500897    1267 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-582989\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-582989?timeout=10s\": dial tcp 192.168.83.166:8443: connect: connection refused"
	Nov 01 00:46:02 pause-582989 kubelet[1267]: E1101 00:46:02.500933    1267 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
	Nov 01 00:46:07 pause-582989 kubelet[1267]: E1101 00:46:07.947981    1267 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Nov 01 00:46:12 pause-582989 kubelet[1267]: I1101 00:46:12.568413    1267 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 00:46:12 pause-582989 kubelet[1267]: I1101 00:46:12.569576    1267 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 00:46:26.532183   44446 logs.go:266] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/17486-7305/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-582989 -n pause-582989
helpers_test.go:261: (dbg) Run:  kubectl --context pause-582989 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (38.47s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.66s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-008483 --alsologtostderr -v=3
E1101 00:52:44.075318   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/kindnet-090856/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-008483 --alsologtostderr -v=3: exit status 82 (2m1.138935594s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-008483"  ...
	* Stopping node "no-preload-008483"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 00:52:43.871816   57676 out.go:296] Setting OutFile to fd 1 ...
	I1101 00:52:43.872149   57676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:52:43.872163   57676 out.go:309] Setting ErrFile to fd 2...
	I1101 00:52:43.872171   57676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:52:43.872372   57676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7305/.minikube/bin
	I1101 00:52:43.872626   57676 out.go:303] Setting JSON to false
	I1101 00:52:43.872766   57676 mustload.go:65] Loading cluster: no-preload-008483
	I1101 00:52:43.873117   57676 config.go:182] Loaded profile config "no-preload-008483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:52:43.873216   57676 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/config.json ...
	I1101 00:52:43.873387   57676 mustload.go:65] Loading cluster: no-preload-008483
	I1101 00:52:43.873499   57676 config.go:182] Loaded profile config "no-preload-008483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:52:43.873533   57676 stop.go:39] StopHost: no-preload-008483
	I1101 00:52:43.873928   57676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:52:43.874005   57676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:52:43.891019   57676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36625
	I1101 00:52:43.891543   57676 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:52:43.892228   57676 main.go:141] libmachine: Using API Version  1
	I1101 00:52:43.892267   57676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:52:43.892672   57676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:52:43.895102   57676 out.go:177] * Stopping node "no-preload-008483"  ...
	I1101 00:52:43.897063   57676 main.go:141] libmachine: Stopping "no-preload-008483"...
	I1101 00:52:43.897085   57676 main.go:141] libmachine: (no-preload-008483) Calling .GetState
	I1101 00:52:43.899211   57676 main.go:141] libmachine: (no-preload-008483) Calling .Stop
	I1101 00:52:43.904034   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 0/60
	I1101 00:52:44.905581   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 1/60
	I1101 00:52:45.907926   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 2/60
	I1101 00:52:46.909578   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 3/60
	I1101 00:52:47.911113   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 4/60
	I1101 00:52:48.913464   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 5/60
	I1101 00:52:49.915724   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 6/60
	I1101 00:52:50.917450   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 7/60
	I1101 00:52:51.918774   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 8/60
	I1101 00:52:52.920147   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 9/60
	I1101 00:52:53.921408   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 10/60
	I1101 00:52:54.923144   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 11/60
	I1101 00:52:55.924642   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 12/60
	I1101 00:52:56.926144   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 13/60
	I1101 00:52:57.927576   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 14/60
	I1101 00:52:58.929152   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 15/60
	I1101 00:52:59.930750   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 16/60
	I1101 00:53:00.932573   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 17/60
	I1101 00:53:01.934493   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 18/60
	I1101 00:53:02.935893   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 19/60
	I1101 00:53:03.938889   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 20/60
	I1101 00:53:04.941319   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 21/60
	I1101 00:53:05.942834   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 22/60
	I1101 00:53:06.944288   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 23/60
	I1101 00:53:07.946062   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 24/60
	I1101 00:53:08.948093   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 25/60
	I1101 00:53:09.949929   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 26/60
	I1101 00:53:10.951762   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 27/60
	I1101 00:53:11.953159   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 28/60
	I1101 00:53:12.954685   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 29/60
	I1101 00:53:13.956144   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 30/60
	I1101 00:53:14.957515   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 31/60
	I1101 00:53:15.959120   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 32/60
	I1101 00:53:16.960595   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 33/60
	I1101 00:53:17.962658   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 34/60
	I1101 00:53:18.964550   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 35/60
	I1101 00:53:19.966273   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 36/60
	I1101 00:53:20.967798   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 37/60
	I1101 00:53:21.969393   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 38/60
	I1101 00:53:22.971286   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 39/60
	I1101 00:53:23.973612   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 40/60
	I1101 00:53:24.975092   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 41/60
	I1101 00:53:25.976801   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 42/60
	I1101 00:53:26.978543   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 43/60
	I1101 00:53:27.980651   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 44/60
	I1101 00:53:28.982390   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 45/60
	I1101 00:53:29.983823   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 46/60
	I1101 00:53:30.985825   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 47/60
	I1101 00:53:31.987413   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 48/60
	I1101 00:53:32.989103   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 49/60
	I1101 00:53:33.991160   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 50/60
	I1101 00:53:34.992642   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 51/60
	I1101 00:53:35.995366   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 52/60
	I1101 00:53:36.996593   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 53/60
	I1101 00:53:37.998024   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 54/60
	I1101 00:53:39.000270   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 55/60
	I1101 00:53:40.001885   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 56/60
	I1101 00:53:41.003331   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 57/60
	I1101 00:53:42.005174   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 58/60
	I1101 00:53:43.006739   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 59/60
	I1101 00:53:44.008188   57676 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1101 00:53:44.008257   57676 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1101 00:53:44.008280   57676 retry.go:31] will retry after 807.493285ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1101 00:53:44.816240   57676 stop.go:39] StopHost: no-preload-008483
	I1101 00:53:44.816606   57676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:53:44.816678   57676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:53:44.832188   57676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41629
	I1101 00:53:44.832650   57676 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:53:44.833176   57676 main.go:141] libmachine: Using API Version  1
	I1101 00:53:44.833198   57676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:53:44.833517   57676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:53:44.835464   57676 out.go:177] * Stopping node "no-preload-008483"  ...
	I1101 00:53:44.836978   57676 main.go:141] libmachine: Stopping "no-preload-008483"...
	I1101 00:53:44.836994   57676 main.go:141] libmachine: (no-preload-008483) Calling .GetState
	I1101 00:53:44.838952   57676 main.go:141] libmachine: (no-preload-008483) Calling .Stop
	I1101 00:53:44.842674   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 0/60
	I1101 00:53:45.844122   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 1/60
	I1101 00:53:46.846319   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 2/60
	I1101 00:53:47.847865   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 3/60
	I1101 00:53:48.849446   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 4/60
	I1101 00:53:49.851853   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 5/60
	I1101 00:53:50.853423   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 6/60
	I1101 00:53:51.854884   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 7/60
	I1101 00:53:52.856511   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 8/60
	I1101 00:53:53.857911   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 9/60
	I1101 00:53:54.859636   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 10/60
	I1101 00:53:55.861212   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 11/60
	I1101 00:53:56.862614   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 12/60
	I1101 00:53:57.864023   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 13/60
	I1101 00:53:58.865912   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 14/60
	I1101 00:53:59.867796   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 15/60
	I1101 00:54:00.869217   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 16/60
	I1101 00:54:01.870511   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 17/60
	I1101 00:54:02.872027   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 18/60
	I1101 00:54:03.873277   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 19/60
	I1101 00:54:04.875573   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 20/60
	I1101 00:54:05.876998   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 21/60
	I1101 00:54:06.878389   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 22/60
	I1101 00:54:07.879845   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 23/60
	I1101 00:54:08.881267   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 24/60
	I1101 00:54:09.882571   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 25/60
	I1101 00:54:10.884087   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 26/60
	I1101 00:54:11.885424   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 27/60
	I1101 00:54:12.886899   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 28/60
	I1101 00:54:13.888332   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 29/60
	I1101 00:54:14.890236   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 30/60
	I1101 00:54:15.891972   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 31/60
	I1101 00:54:16.893580   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 32/60
	I1101 00:54:17.895100   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 33/60
	I1101 00:54:18.896396   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 34/60
	I1101 00:54:19.898876   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 35/60
	I1101 00:54:20.900372   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 36/60
	I1101 00:54:21.901775   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 37/60
	I1101 00:54:22.903080   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 38/60
	I1101 00:54:23.904702   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 39/60
	I1101 00:54:24.907280   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 40/60
	I1101 00:54:25.908849   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 41/60
	I1101 00:54:26.910404   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 42/60
	I1101 00:54:27.912060   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 43/60
	I1101 00:54:28.913718   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 44/60
	I1101 00:54:29.916226   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 45/60
	I1101 00:54:30.917665   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 46/60
	I1101 00:54:31.919139   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 47/60
	I1101 00:54:32.920741   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 48/60
	I1101 00:54:33.922510   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 49/60
	I1101 00:54:34.924044   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 50/60
	I1101 00:54:35.925544   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 51/60
	I1101 00:54:36.927237   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 52/60
	I1101 00:54:37.928607   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 53/60
	I1101 00:54:38.929976   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 54/60
	I1101 00:54:39.931998   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 55/60
	I1101 00:54:40.933369   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 56/60
	I1101 00:54:41.935093   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 57/60
	I1101 00:54:42.937085   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 58/60
	I1101 00:54:43.939043   57676 main.go:141] libmachine: (no-preload-008483) Waiting for machine to stop 59/60
	I1101 00:54:44.940020   57676 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1101 00:54:44.940062   57676 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1101 00:54:44.941987   57676 out.go:177] 
	W1101 00:54:44.943614   57676 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1101 00:54:44.943640   57676 out.go:239] * 
	* 
	W1101 00:54:44.946128   57676 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 00:54:44.947794   57676 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-008483 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-008483 -n no-preload-008483
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-008483 -n no-preload-008483: exit status 3 (18.522821328s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 00:55:03.472272   58326 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.140:22: connect: no route to host
	E1101 00:55:03.472301   58326 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.140:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-008483" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.66s)
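
For context on the failure mode above: the log shows sixty one-second polls of the VM state ("Waiting for machine to stop i/60"), one retried stop attempt after a sub-second backoff, and finally GUEST_STOP_TIMEOUT with exit status 82. The snippet below is only an illustrative sketch of that wait-and-retry shape, inferred from the stop.go and retry.go call sites in the log; it is not the actual minikube source, and isRunning, waitForStop, and stopHost are assumed names.

	package main
	
	import (
		"errors"
		"fmt"
		"os"
		"time"
	)
	
	// isRunning stands in for the libmachine driver's state query; in this run
	// it kept reporting "Running" for the full two minutes. (Assumed name.)
	func isRunning() bool { return true }
	
	// waitForStop mirrors the "Waiting for machine to stop i/60" loop: poll the
	// VM state once a second and give up after 60 attempts. (Assumed name.)
	func waitForStop() error {
		for i := 0; i < 60; i++ {
			if !isRunning() {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/60\n", i)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}
	
	// stopHost retries the whole stop once after a short backoff, matching the
	// "will retry after ..." line; a second failure becomes GUEST_STOP_TIMEOUT.
	func stopHost() error {
		if err := waitForStop(); err == nil {
			return nil
		}
		time.Sleep(800 * time.Millisecond) // the log shows a sub-second backoff
		return waitForStop()
	}
	
	func main() {
		if err := stopHost(); err != nil {
			fmt.Fprintf(os.Stderr, "X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: %v\n", err)
			os.Exit(82) // the exit status the test observed
		}
	}

In this run the state never left "Running", so both attempts exhausted their 60 polls and the stop command exited 82.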

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.48s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-754132 --alsologtostderr -v=3
E1101 00:52:48.557337   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/kindnet-090856/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-754132 --alsologtostderr -v=3: exit status 82 (2m1.029557529s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-754132"  ...
	* Stopping node "embed-certs-754132"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 00:52:46.351124   57763 out.go:296] Setting OutFile to fd 1 ...
	I1101 00:52:46.351378   57763 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:52:46.351387   57763 out.go:309] Setting ErrFile to fd 2...
	I1101 00:52:46.351391   57763 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:52:46.351562   57763 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7305/.minikube/bin
	I1101 00:52:46.351785   57763 out.go:303] Setting JSON to false
	I1101 00:52:46.351865   57763 mustload.go:65] Loading cluster: embed-certs-754132
	I1101 00:52:46.352214   57763 config.go:182] Loaded profile config "embed-certs-754132": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:52:46.352280   57763 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/config.json ...
	I1101 00:52:46.352493   57763 mustload.go:65] Loading cluster: embed-certs-754132
	I1101 00:52:46.352619   57763 config.go:182] Loaded profile config "embed-certs-754132": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:52:46.352644   57763 stop.go:39] StopHost: embed-certs-754132
	I1101 00:52:46.353009   57763 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:52:46.353070   57763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:52:46.368715   57763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44457
	I1101 00:52:46.369155   57763 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:52:46.369745   57763 main.go:141] libmachine: Using API Version  1
	I1101 00:52:46.369781   57763 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:52:46.370212   57763 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:52:46.374106   57763 out.go:177] * Stopping node "embed-certs-754132"  ...
	I1101 00:52:46.375614   57763 main.go:141] libmachine: Stopping "embed-certs-754132"...
	I1101 00:52:46.375632   57763 main.go:141] libmachine: (embed-certs-754132) Calling .GetState
	I1101 00:52:46.377515   57763 main.go:141] libmachine: (embed-certs-754132) Calling .Stop
	I1101 00:52:46.381213   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 0/60
	I1101 00:52:47.383471   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 1/60
	I1101 00:52:48.384880   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 2/60
	I1101 00:52:49.386534   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 3/60
	I1101 00:52:50.388038   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 4/60
	I1101 00:52:51.390216   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 5/60
	I1101 00:52:52.391730   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 6/60
	I1101 00:52:53.393456   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 7/60
	I1101 00:52:54.395312   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 8/60
	I1101 00:52:55.397559   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 9/60
	I1101 00:52:56.399240   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 10/60
	I1101 00:52:57.401842   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 11/60
	I1101 00:52:58.403591   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 12/60
	I1101 00:52:59.405477   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 13/60
	I1101 00:53:00.406801   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 14/60
	I1101 00:53:01.409184   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 15/60
	I1101 00:53:02.410735   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 16/60
	I1101 00:53:03.412186   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 17/60
	I1101 00:53:04.414489   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 18/60
	I1101 00:53:05.415886   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 19/60
	I1101 00:53:06.418440   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 20/60
	I1101 00:53:07.419819   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 21/60
	I1101 00:53:08.421241   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 22/60
	I1101 00:53:09.422593   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 23/60
	I1101 00:53:10.423916   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 24/60
	I1101 00:53:11.426175   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 25/60
	I1101 00:53:12.427562   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 26/60
	I1101 00:53:13.429162   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 27/60
	I1101 00:53:14.430417   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 28/60
	I1101 00:53:15.431819   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 29/60
	I1101 00:53:16.433946   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 30/60
	I1101 00:53:17.435313   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 31/60
	I1101 00:53:18.436965   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 32/60
	I1101 00:53:19.438839   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 33/60
	I1101 00:53:20.440130   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 34/60
	I1101 00:53:21.442168   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 35/60
	I1101 00:53:22.443630   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 36/60
	I1101 00:53:23.445005   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 37/60
	I1101 00:53:24.446560   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 38/60
	I1101 00:53:25.447997   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 39/60
	I1101 00:53:26.449272   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 40/60
	I1101 00:53:27.450918   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 41/60
	I1101 00:53:28.452508   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 42/60
	I1101 00:53:29.454040   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 43/60
	I1101 00:53:30.455365   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 44/60
	I1101 00:53:31.457575   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 45/60
	I1101 00:53:32.458709   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 46/60
	I1101 00:53:33.460205   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 47/60
	I1101 00:53:34.461398   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 48/60
	I1101 00:53:35.462899   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 49/60
	I1101 00:53:36.465226   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 50/60
	I1101 00:53:37.466475   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 51/60
	I1101 00:53:38.467742   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 52/60
	I1101 00:53:39.469007   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 53/60
	I1101 00:53:40.470212   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 54/60
	I1101 00:53:41.472218   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 55/60
	I1101 00:53:42.474531   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 56/60
	I1101 00:53:43.476026   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 57/60
	I1101 00:53:44.477532   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 58/60
	I1101 00:53:45.478857   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 59/60
	I1101 00:53:46.480343   57763 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1101 00:53:46.480386   57763 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1101 00:53:46.480402   57763 retry.go:31] will retry after 706.86609ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1101 00:53:47.188318   57763 stop.go:39] StopHost: embed-certs-754132
	I1101 00:53:47.188691   57763 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:53:47.188760   57763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:53:47.203264   57763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33997
	I1101 00:53:47.203692   57763 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:53:47.204164   57763 main.go:141] libmachine: Using API Version  1
	I1101 00:53:47.204187   57763 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:53:47.204553   57763 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:53:47.206824   57763 out.go:177] * Stopping node "embed-certs-754132"  ...
	I1101 00:53:47.208427   57763 main.go:141] libmachine: Stopping "embed-certs-754132"...
	I1101 00:53:47.208449   57763 main.go:141] libmachine: (embed-certs-754132) Calling .GetState
	I1101 00:53:47.210242   57763 main.go:141] libmachine: (embed-certs-754132) Calling .Stop
	I1101 00:53:47.213829   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 0/60
	I1101 00:53:48.215378   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 1/60
	I1101 00:53:49.216699   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 2/60
	I1101 00:53:50.218191   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 3/60
	I1101 00:53:51.219748   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 4/60
	I1101 00:53:52.222133   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 5/60
	I1101 00:53:53.224321   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 6/60
	I1101 00:53:54.226347   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 7/60
	I1101 00:53:55.227977   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 8/60
	I1101 00:53:56.229470   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 9/60
	I1101 00:53:57.231822   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 10/60
	I1101 00:53:58.233268   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 11/60
	I1101 00:53:59.234699   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 12/60
	I1101 00:54:00.236413   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 13/60
	I1101 00:54:01.237632   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 14/60
	I1101 00:54:02.239301   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 15/60
	I1101 00:54:03.240817   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 16/60
	I1101 00:54:04.242238   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 17/60
	I1101 00:54:05.243769   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 18/60
	I1101 00:54:06.245247   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 19/60
	I1101 00:54:07.246744   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 20/60
	I1101 00:54:08.248205   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 21/60
	I1101 00:54:09.249770   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 22/60
	I1101 00:54:10.251278   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 23/60
	I1101 00:54:11.252542   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 24/60
	I1101 00:54:12.254314   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 25/60
	I1101 00:54:13.255703   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 26/60
	I1101 00:54:14.257018   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 27/60
	I1101 00:54:15.258526   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 28/60
	I1101 00:54:16.259884   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 29/60
	I1101 00:54:17.261876   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 30/60
	I1101 00:54:18.263364   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 31/60
	I1101 00:54:19.264765   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 32/60
	I1101 00:54:20.266235   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 33/60
	I1101 00:54:21.267490   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 34/60
	I1101 00:54:22.269047   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 35/60
	I1101 00:54:23.270251   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 36/60
	I1101 00:54:24.271544   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 37/60
	I1101 00:54:25.272890   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 38/60
	I1101 00:54:26.274602   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 39/60
	I1101 00:54:27.276517   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 40/60
	I1101 00:54:28.278907   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 41/60
	I1101 00:54:29.280410   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 42/60
	I1101 00:54:30.281646   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 43/60
	I1101 00:54:31.283014   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 44/60
	I1101 00:54:32.284809   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 45/60
	I1101 00:54:33.286132   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 46/60
	I1101 00:54:34.287904   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 47/60
	I1101 00:54:35.289329   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 48/60
	I1101 00:54:36.290838   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 49/60
	I1101 00:54:37.293280   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 50/60
	I1101 00:54:38.294680   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 51/60
	I1101 00:54:39.296458   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 52/60
	I1101 00:54:40.297844   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 53/60
	I1101 00:54:41.299454   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 54/60
	I1101 00:54:42.301400   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 55/60
	I1101 00:54:43.302802   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 56/60
	I1101 00:54:44.304482   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 57/60
	I1101 00:54:45.306277   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 58/60
	I1101 00:54:46.307600   57763 main.go:141] libmachine: (embed-certs-754132) Waiting for machine to stop 59/60
	I1101 00:54:47.308675   57763 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1101 00:54:47.308725   57763 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1101 00:54:47.310837   57763 out.go:177] 
	W1101 00:54:47.312575   57763 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1101 00:54:47.312595   57763 out.go:239] * 
	* 
	W1101 00:54:47.314961   57763 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 00:54:47.316652   57763 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-754132 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-754132 -n embed-certs-754132
E1101 00:54:51.059654   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/custom-flannel-090856/client.crt: no such file or directory
E1101 00:54:51.064949   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/custom-flannel-090856/client.crt: no such file or directory
E1101 00:54:51.075205   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/custom-flannel-090856/client.crt: no such file or directory
E1101 00:54:51.095561   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/custom-flannel-090856/client.crt: no such file or directory
E1101 00:54:51.135914   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/custom-flannel-090856/client.crt: no such file or directory
E1101 00:54:51.216707   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/custom-flannel-090856/client.crt: no such file or directory
E1101 00:54:51.377313   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/custom-flannel-090856/client.crt: no such file or directory
E1101 00:54:51.698050   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/custom-flannel-090856/client.crt: no such file or directory
E1101 00:54:52.338322   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/custom-flannel-090856/client.crt: no such file or directory
E1101 00:54:53.619388   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/custom-flannel-090856/client.crt: no such file or directory
E1101 00:54:54.370667   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/calico-090856/client.crt: no such file or directory
E1101 00:54:56.180489   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/custom-flannel-090856/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-754132 -n embed-certs-754132: exit status 3 (18.454085963s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 00:55:05.772276   58356 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.83:22: connect: no route to host
	E1101 00:55:05.772299   58356 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.83:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-754132" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.48s)
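
The post-mortem step is the same in each of these sections: the status probe prints "Error" and exits 3 because an SSH session to the guest can no longer be established ("dial tcp ...:22: connect: no route to host"), even though the VM is still considered running. The following is a minimal sketch of that probe under stated assumptions: checkHost is an assumed helper name (not minikube's API), and the address is simply the one from this run's log.

	package main
	
	import (
		"fmt"
		"net"
		"os"
		"time"
	)
	
	// checkHost approximates what the status command needs: an SSH session to
	// the guest. When the TCP dial to port 22 fails, the state is reported as
	// "Error" and the process exits 3, matching the output captured above.
	func checkHost(addr string) (string, int) {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "status error: NewSession: %v\n", err)
			return "Error", 3
		}
		conn.Close()
		return "Running", 0
	}
	
	func main() {
		state, code := checkHost("192.168.61.83:22") // address from this run's log
		fmt.Println(state)
		os.Exit(code)
	}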

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (139.92s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-330042 --alsologtostderr -v=3
E1101 00:53:02.504329   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
E1101 00:53:03.919382   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/kindnet-090856/client.crt: no such file or directory
E1101 00:53:24.399922   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/kindnet-090856/client.crt: no such file or directory
E1101 00:53:32.448554   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/calico-090856/client.crt: no such file or directory
E1101 00:53:32.453847   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/calico-090856/client.crt: no such file or directory
E1101 00:53:32.464073   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/calico-090856/client.crt: no such file or directory
E1101 00:53:32.484368   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/calico-090856/client.crt: no such file or directory
E1101 00:53:32.524945   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/calico-090856/client.crt: no such file or directory
E1101 00:53:32.605383   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/calico-090856/client.crt: no such file or directory
E1101 00:53:32.765881   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/calico-090856/client.crt: no such file or directory
E1101 00:53:33.086488   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/calico-090856/client.crt: no such file or directory
E1101 00:53:33.726960   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/calico-090856/client.crt: no such file or directory
E1101 00:53:35.007972   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/calico-090856/client.crt: no such file or directory
E1101 00:53:37.568880   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/calico-090856/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-330042 --alsologtostderr -v=3: exit status 82 (2m1.335443359s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-330042"  ...
	* Stopping node "old-k8s-version-330042"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 00:52:55.126640   57860 out.go:296] Setting OutFile to fd 1 ...
	I1101 00:52:55.126769   57860 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:52:55.126779   57860 out.go:309] Setting ErrFile to fd 2...
	I1101 00:52:55.126786   57860 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:52:55.126985   57860 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7305/.minikube/bin
	I1101 00:52:55.127233   57860 out.go:303] Setting JSON to false
	I1101 00:52:55.127323   57860 mustload.go:65] Loading cluster: old-k8s-version-330042
	I1101 00:52:55.127676   57860 config.go:182] Loaded profile config "old-k8s-version-330042": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1101 00:52:55.127739   57860 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/config.json ...
	I1101 00:52:55.127904   57860 mustload.go:65] Loading cluster: old-k8s-version-330042
	I1101 00:52:55.128066   57860 config.go:182] Loaded profile config "old-k8s-version-330042": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1101 00:52:55.128096   57860 stop.go:39] StopHost: old-k8s-version-330042
	I1101 00:52:55.128478   57860 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:52:55.128536   57860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:52:55.143427   57860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37241
	I1101 00:52:55.143951   57860 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:52:55.144610   57860 main.go:141] libmachine: Using API Version  1
	I1101 00:52:55.144634   57860 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:52:55.145008   57860 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:52:55.147562   57860 out.go:177] * Stopping node "old-k8s-version-330042"  ...
	I1101 00:52:55.148912   57860 main.go:141] libmachine: Stopping "old-k8s-version-330042"...
	I1101 00:52:55.148943   57860 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetState
	I1101 00:52:55.150721   57860 main.go:141] libmachine: (old-k8s-version-330042) Calling .Stop
	I1101 00:52:55.154508   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 0/60
	I1101 00:52:56.156493   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 1/60
	I1101 00:52:57.158231   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 2/60
	I1101 00:52:58.160716   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 3/60
	I1101 00:52:59.162469   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 4/60
	I1101 00:53:00.163836   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 5/60
	I1101 00:53:01.165349   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 6/60
	I1101 00:53:02.166818   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 7/60
	I1101 00:53:03.168961   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 8/60
	I1101 00:53:04.170532   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 9/60
	I1101 00:53:05.172862   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 10/60
	I1101 00:53:06.174430   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 11/60
	I1101 00:53:07.175884   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 12/60
	I1101 00:53:08.177484   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 13/60
	I1101 00:53:09.178902   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 14/60
	I1101 00:53:10.181107   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 15/60
	I1101 00:53:11.182739   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 16/60
	I1101 00:53:12.184188   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 17/60
	I1101 00:53:13.186935   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 18/60
	I1101 00:53:14.188333   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 19/60
	I1101 00:53:15.190103   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 20/60
	I1101 00:53:16.192507   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 21/60
	I1101 00:53:17.194596   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 22/60
	I1101 00:53:18.196083   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 23/60
	I1101 00:53:19.197488   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 24/60
	I1101 00:53:20.199418   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 25/60
	I1101 00:53:21.201046   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 26/60
	I1101 00:53:22.203016   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 27/60
	I1101 00:53:23.204852   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 28/60
	I1101 00:53:24.206430   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 29/60
	I1101 00:53:25.207875   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 30/60
	I1101 00:53:26.209375   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 31/60
	I1101 00:53:27.211365   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 32/60
	I1101 00:53:28.212926   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 33/60
	I1101 00:53:29.214675   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 34/60
	I1101 00:53:30.216823   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 35/60
	I1101 00:53:31.218405   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 36/60
	I1101 00:53:32.219999   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 37/60
	I1101 00:53:33.221409   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 38/60
	I1101 00:53:34.222746   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 39/60
	I1101 00:53:35.225029   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 40/60
	I1101 00:53:36.226407   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 41/60
	I1101 00:53:37.227630   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 42/60
	I1101 00:53:38.229334   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 43/60
	I1101 00:53:39.230794   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 44/60
	I1101 00:53:40.232713   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 45/60
	I1101 00:53:41.234401   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 46/60
	I1101 00:53:42.235779   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 47/60
	I1101 00:53:43.237360   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 48/60
	I1101 00:53:44.238880   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 49/60
	I1101 00:53:45.241041   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 50/60
	I1101 00:53:46.244000   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 51/60
	I1101 00:53:47.245321   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 52/60
	I1101 00:53:48.246765   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 53/60
	I1101 00:53:49.248269   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 54/60
	I1101 00:53:50.250346   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 55/60
	I1101 00:53:51.251781   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 56/60
	I1101 00:53:52.253225   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 57/60
	I1101 00:53:53.254924   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 58/60
	I1101 00:53:54.256495   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 59/60
	I1101 00:53:55.257882   57860 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1101 00:53:55.257949   57860 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1101 00:53:55.257967   57860 retry.go:31] will retry after 1.011264341s: Temporary Error: stop: unable to stop vm, current state "Running"
	I1101 00:53:56.270109   57860 stop.go:39] StopHost: old-k8s-version-330042
	I1101 00:53:56.270728   57860 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:53:56.270816   57860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:53:56.287064   57860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41871
	I1101 00:53:56.287532   57860 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:53:56.287990   57860 main.go:141] libmachine: Using API Version  1
	I1101 00:53:56.288006   57860 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:53:56.288358   57860 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:53:56.290489   57860 out.go:177] * Stopping node "old-k8s-version-330042"  ...
	I1101 00:53:56.291829   57860 main.go:141] libmachine: Stopping "old-k8s-version-330042"...
	I1101 00:53:56.291843   57860 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetState
	I1101 00:53:56.293954   57860 main.go:141] libmachine: (old-k8s-version-330042) Calling .Stop
	I1101 00:53:56.297512   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 0/60
	I1101 00:53:57.299038   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 1/60
	I1101 00:53:58.300444   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 2/60
	I1101 00:53:59.302041   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 3/60
	I1101 00:54:00.303503   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 4/60
	I1101 00:54:01.305426   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 5/60
	I1101 00:54:02.306882   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 6/60
	I1101 00:54:03.308421   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 7/60
	I1101 00:54:04.309805   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 8/60
	I1101 00:54:05.311208   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 9/60
	I1101 00:54:06.313405   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 10/60
	I1101 00:54:07.314753   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 11/60
	I1101 00:54:08.316298   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 12/60
	I1101 00:54:09.317665   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 13/60
	I1101 00:54:10.318991   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 14/60
	I1101 00:54:11.320967   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 15/60
	I1101 00:54:12.322551   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 16/60
	I1101 00:54:13.324035   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 17/60
	I1101 00:54:14.325530   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 18/60
	I1101 00:54:15.326821   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 19/60
	I1101 00:54:16.328786   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 20/60
	I1101 00:54:17.330494   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 21/60
	I1101 00:54:18.332270   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 22/60
	I1101 00:54:19.333688   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 23/60
	I1101 00:54:20.334963   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 24/60
	I1101 00:54:21.336753   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 25/60
	I1101 00:54:22.338450   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 26/60
	I1101 00:54:23.339779   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 27/60
	I1101 00:54:24.341576   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 28/60
	I1101 00:54:25.342959   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 29/60
	I1101 00:54:26.345290   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 30/60
	I1101 00:54:27.346797   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 31/60
	I1101 00:54:28.348167   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 32/60
	I1101 00:54:29.349767   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 33/60
	I1101 00:54:30.351406   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 34/60
	I1101 00:54:31.353356   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 35/60
	I1101 00:54:32.354820   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 36/60
	I1101 00:54:33.356181   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 37/60
	I1101 00:54:34.358394   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 38/60
	I1101 00:54:35.359751   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 39/60
	I1101 00:54:36.361651   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 40/60
	I1101 00:54:37.362998   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 41/60
	I1101 00:54:38.365040   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 42/60
	I1101 00:54:39.366522   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 43/60
	I1101 00:54:40.367886   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 44/60
	I1101 00:54:41.370009   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 45/60
	I1101 00:54:42.371829   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 46/60
	I1101 00:54:43.373581   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 47/60
	I1101 00:54:44.375051   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 48/60
	I1101 00:54:45.376425   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 49/60
	I1101 00:54:46.378153   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 50/60
	I1101 00:54:47.379448   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 51/60
	I1101 00:54:48.380891   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 52/60
	I1101 00:54:49.382279   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 53/60
	I1101 00:54:50.383675   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 54/60
	I1101 00:54:51.385297   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 55/60
	I1101 00:54:52.386661   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 56/60
	I1101 00:54:53.388260   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 57/60
	I1101 00:54:54.389700   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 58/60
	I1101 00:54:55.391084   57860 main.go:141] libmachine: (old-k8s-version-330042) Waiting for machine to stop 59/60
	I1101 00:54:56.392169   57860 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1101 00:54:56.392213   57860 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1101 00:54:56.394401   57860 out.go:177] 
	W1101 00:54:56.396113   57860 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1101 00:54:56.396140   57860 out.go:239] * 
	* 
	W1101 00:54:56.398642   57860 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 00:54:56.400054   57860 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-330042 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-330042 -n old-k8s-version-330042
E1101 00:55:01.301248   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/custom-flannel-090856/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-330042 -n old-k8s-version-330042: exit status 3 (18.586117129s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 00:55:14.988256   58414 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.90:22: connect: no route to host
	E1101 00:55:14.988277   58414 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.90:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-330042" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (139.92s)
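
The repeated "Waiting for machine to stop N/60" lines above reflect a simple pattern: the KVM driver is asked to stop the guest, and the machine state is then polled roughly once per second, up to 60 attempts, before the command gives up with GUEST_STOP_TIMEOUT. The Go sketch below illustrates only that polling shape, using made-up stand-in types (State, Driver, fakeDriver); it is not minikube's or libmachine's actual code.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// State is a stand-in for the machine state reported by a driver.
	type State int

	const (
		Running State = iota
		Stopped
	)

	// Driver is a stand-in for the subset of a machine driver used while stopping.
	type Driver interface {
		Stop() error
		GetState() (State, error)
	}

	// waitForStop mirrors the observed behavior: request a stop, then poll the
	// state once per second for up to `attempts` tries before giving up.
	func waitForStop(d Driver, attempts int) error {
		if err := d.Stop(); err != nil {
			return err
		}
		for i := 0; i < attempts; i++ {
			if st, err := d.GetState(); err == nil && st == Stopped {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			time.Sleep(1 * time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	// fakeDriver never reaches Stopped, reproducing the timeout seen above.
	type fakeDriver struct{}

	func (fakeDriver) Stop() error              { return nil }
	func (fakeDriver) GetState() (State, error) { return Running, nil }

	func main() {
		// The log above uses 60 attempts; 3 keeps the example quick.
		if err := waitForStop(fakeDriver{}, 3); err != nil {
			fmt.Println("stop err:", err)
		}
	}
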

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.56s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-639310 --alsologtostderr -v=3
E1101 00:53:57.967563   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/auto-090856/client.crt: no such file or directory
E1101 00:54:05.360575   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/kindnet-090856/client.crt: no such file or directory
E1101 00:54:13.410430   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/calico-090856/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-639310 --alsologtostderr -v=3: exit status 82 (2m1.005145149s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-639310"  ...
	* Stopping node "default-k8s-diff-port-639310"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 00:53:53.083064   58177 out.go:296] Setting OutFile to fd 1 ...
	I1101 00:53:53.083167   58177 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:53:53.083178   58177 out.go:309] Setting ErrFile to fd 2...
	I1101 00:53:53.083184   58177 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:53:53.083395   58177 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7305/.minikube/bin
	I1101 00:53:53.083614   58177 out.go:303] Setting JSON to false
	I1101 00:53:53.083701   58177 mustload.go:65] Loading cluster: default-k8s-diff-port-639310
	I1101 00:53:53.084117   58177 config.go:182] Loaded profile config "default-k8s-diff-port-639310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:53:53.084186   58177 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/config.json ...
	I1101 00:53:53.084356   58177 mustload.go:65] Loading cluster: default-k8s-diff-port-639310
	I1101 00:53:53.084465   58177 config.go:182] Loaded profile config "default-k8s-diff-port-639310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:53:53.084487   58177 stop.go:39] StopHost: default-k8s-diff-port-639310
	I1101 00:53:53.084845   58177 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:53:53.084895   58177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:53:53.099591   58177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39411
	I1101 00:53:53.100046   58177 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:53:53.100627   58177 main.go:141] libmachine: Using API Version  1
	I1101 00:53:53.100652   58177 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:53:53.100976   58177 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:53:53.103621   58177 out.go:177] * Stopping node "default-k8s-diff-port-639310"  ...
	I1101 00:53:53.105504   58177 main.go:141] libmachine: Stopping "default-k8s-diff-port-639310"...
	I1101 00:53:53.105521   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetState
	I1101 00:53:53.107141   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Stop
	I1101 00:53:53.110188   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 0/60
	I1101 00:53:54.111548   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 1/60
	I1101 00:53:55.113121   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 2/60
	I1101 00:53:56.114833   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 3/60
	I1101 00:53:57.116519   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 4/60
	I1101 00:53:58.118557   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 5/60
	I1101 00:53:59.120322   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 6/60
	I1101 00:54:00.121841   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 7/60
	I1101 00:54:01.123394   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 8/60
	I1101 00:54:02.125001   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 9/60
	I1101 00:54:03.126209   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 10/60
	I1101 00:54:04.127762   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 11/60
	I1101 00:54:05.129555   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 12/60
	I1101 00:54:06.131143   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 13/60
	I1101 00:54:07.132588   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 14/60
	I1101 00:54:08.134694   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 15/60
	I1101 00:54:09.136279   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 16/60
	I1101 00:54:10.137816   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 17/60
	I1101 00:54:11.139509   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 18/60
	I1101 00:54:12.141209   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 19/60
	I1101 00:54:13.143424   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 20/60
	I1101 00:54:14.144874   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 21/60
	I1101 00:54:15.147253   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 22/60
	I1101 00:54:16.148795   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 23/60
	I1101 00:54:17.150493   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 24/60
	I1101 00:54:18.152866   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 25/60
	I1101 00:54:19.154425   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 26/60
	I1101 00:54:20.155754   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 27/60
	I1101 00:54:21.157180   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 28/60
	I1101 00:54:22.158559   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 29/60
	I1101 00:54:23.160770   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 30/60
	I1101 00:54:24.162341   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 31/60
	I1101 00:54:25.163523   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 32/60
	I1101 00:54:26.165344   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 33/60
	I1101 00:54:27.166829   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 34/60
	I1101 00:54:28.168990   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 35/60
	I1101 00:54:29.170613   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 36/60
	I1101 00:54:30.172243   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 37/60
	I1101 00:54:31.173866   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 38/60
	I1101 00:54:32.175328   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 39/60
	I1101 00:54:33.177671   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 40/60
	I1101 00:54:34.179146   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 41/60
	I1101 00:54:35.180738   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 42/60
	I1101 00:54:36.182282   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 43/60
	I1101 00:54:37.183817   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 44/60
	I1101 00:54:38.186075   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 45/60
	I1101 00:54:39.187513   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 46/60
	I1101 00:54:40.188968   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 47/60
	I1101 00:54:41.190383   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 48/60
	I1101 00:54:42.192450   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 49/60
	I1101 00:54:43.194793   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 50/60
	I1101 00:54:44.196543   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 51/60
	I1101 00:54:45.197737   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 52/60
	I1101 00:54:46.199177   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 53/60
	I1101 00:54:47.200635   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 54/60
	I1101 00:54:48.202705   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 55/60
	I1101 00:54:49.204355   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 56/60
	I1101 00:54:50.205865   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 57/60
	I1101 00:54:51.207370   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 58/60
	I1101 00:54:52.208919   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 59/60
	I1101 00:54:53.209420   58177 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1101 00:54:53.209504   58177 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1101 00:54:53.209521   58177 retry.go:31] will retry after 688.796235ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1101 00:54:53.898423   58177 stop.go:39] StopHost: default-k8s-diff-port-639310
	I1101 00:54:53.898811   58177 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:54:53.898856   58177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:54:53.913698   58177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45769
	I1101 00:54:53.914188   58177 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:54:53.914667   58177 main.go:141] libmachine: Using API Version  1
	I1101 00:54:53.914695   58177 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:54:53.915062   58177 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:54:53.917521   58177 out.go:177] * Stopping node "default-k8s-diff-port-639310"  ...
	I1101 00:54:53.919036   58177 main.go:141] libmachine: Stopping "default-k8s-diff-port-639310"...
	I1101 00:54:53.919067   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetState
	I1101 00:54:53.920897   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Stop
	I1101 00:54:53.924872   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 0/60
	I1101 00:54:54.926221   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 1/60
	I1101 00:54:55.927742   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 2/60
	I1101 00:54:56.929305   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 3/60
	I1101 00:54:57.931028   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 4/60
	I1101 00:54:58.932842   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 5/60
	I1101 00:54:59.934528   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 6/60
	I1101 00:55:00.936205   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 7/60
	I1101 00:55:01.937689   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 8/60
	I1101 00:55:02.939040   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 9/60
	I1101 00:55:03.941144   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 10/60
	I1101 00:55:04.942699   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 11/60
	I1101 00:55:05.944509   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 12/60
	I1101 00:55:06.946061   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 13/60
	I1101 00:55:07.947481   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 14/60
	I1101 00:55:08.948914   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 15/60
	I1101 00:55:09.950418   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 16/60
	I1101 00:55:10.952096   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 17/60
	I1101 00:55:11.953615   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 18/60
	I1101 00:55:12.954840   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 19/60
	I1101 00:55:13.956346   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 20/60
	I1101 00:55:14.958047   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 21/60
	I1101 00:55:15.959560   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 22/60
	I1101 00:55:16.961143   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 23/60
	I1101 00:55:17.962625   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 24/60
	I1101 00:55:18.964308   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 25/60
	I1101 00:55:19.966008   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 26/60
	I1101 00:55:20.967436   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 27/60
	I1101 00:55:21.969056   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 28/60
	I1101 00:55:22.970686   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 29/60
	I1101 00:55:23.972952   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 30/60
	I1101 00:55:24.974561   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 31/60
	I1101 00:55:25.976232   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 32/60
	I1101 00:55:26.978104   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 33/60
	I1101 00:55:27.979820   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 34/60
	I1101 00:55:28.981447   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 35/60
	I1101 00:55:29.983258   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 36/60
	I1101 00:55:30.984635   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 37/60
	I1101 00:55:31.986174   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 38/60
	I1101 00:55:32.987580   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 39/60
	I1101 00:55:33.989491   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 40/60
	I1101 00:55:34.991074   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 41/60
	I1101 00:55:35.992565   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 42/60
	I1101 00:55:36.994261   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 43/60
	I1101 00:55:37.995926   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 44/60
	I1101 00:55:38.997764   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 45/60
	I1101 00:55:39.999188   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 46/60
	I1101 00:55:41.000871   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 47/60
	I1101 00:55:42.002645   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 48/60
	I1101 00:55:43.004613   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 49/60
	I1101 00:55:44.006624   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 50/60
	I1101 00:55:45.008044   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 51/60
	I1101 00:55:46.009448   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 52/60
	I1101 00:55:47.011095   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 53/60
	I1101 00:55:48.012591   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 54/60
	I1101 00:55:49.014141   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 55/60
	I1101 00:55:50.016018   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 56/60
	I1101 00:55:51.017389   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 57/60
	I1101 00:55:52.018979   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 58/60
	I1101 00:55:53.020519   58177 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for machine to stop 59/60
	I1101 00:55:54.021166   58177 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1101 00:55:54.021210   58177 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1101 00:55:54.023498   58177 out.go:177] 
	W1101 00:55:54.025198   58177 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1101 00:55:54.025228   58177 out.go:239] * 
	* 
	W1101 00:55:54.027512   58177 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 00:55:54.029265   58177 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-639310 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-639310 -n default-k8s-diff-port-639310
E1101 00:55:54.076586   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/bridge-090856/client.crt: no such file or directory
E1101 00:55:55.356926   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/bridge-090856/client.crt: no such file or directory
E1101 00:55:57.918100   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/bridge-090856/client.crt: no such file or directory
E1101 00:55:58.160817   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/flannel-090856/client.crt: no such file or directory
E1101 00:56:03.038708   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/bridge-090856/client.crt: no such file or directory
E1101 00:56:08.401505   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/flannel-090856/client.crt: no such file or directory
E1101 00:56:11.278390   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/enable-default-cni-090856/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-639310 -n default-k8s-diff-port-639310: exit status 3 (18.55783728s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 00:56:12.588329   58956 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.97:22: connect: no route to host
	E1101 00:56:12.588349   58956 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.97:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-639310" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.56s)
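
When a graceful stop keeps timing out like this, one out-of-band check an operator can make is to ask libvirt directly what state the domain is in and, if necessary, hard power it off. The sketch below shells out to virsh for that; it is not part of the test suite, and the domain name is a placeholder assumed to match the minikube profile name, which may not hold in every setup.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a virsh subcommand and prints its combined output.
	func run(args ...string) error {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		fmt.Printf("$ virsh %v\n%s", args, out)
		return err
	}

	func main() {
		// Hypothetical domain name; assumed to match the profile name above.
		domain := "default-k8s-diff-port-639310"

		// Show all libvirt domains and their states.
		if err := run("list", "--all"); err != nil {
			fmt.Println("list failed:", err)
			return
		}
		// Hard power-off the guest (equivalent to pulling the plug); only
		// worth reaching for when a graceful stop keeps timing out.
		if err := run("destroy", domain); err != nil {
			fmt.Println("destroy failed:", err)
		}
	}
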

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.41s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-008483 -n no-preload-008483
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-008483 -n no-preload-008483: exit status 3 (3.195824881s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 00:55:06.668239   58467 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.140:22: connect: no route to host
	E1101 00:55:06.668263   58467 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.140:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-008483 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-008483 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152909731s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.140:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-008483 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-008483 -n no-preload-008483
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-008483 -n no-preload-008483: exit status 3 (3.063054652s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 00:55:15.884345   58574 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.140:22: connect: no route to host
	E1101 00:55:15.884367   58574 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.140:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-008483" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.41s)
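
The recurring "dial tcp <ip>:22: connect: no route to host" errors mean the guest's SSH port is simply unreachable, so every status and addon call that needs an SSH session fails before doing anything useful. The snippet below is a minimal reachability probe that reproduces that class of error; it is not minikube's status implementation, which goes through a full SSH client, and the address is copied from the log above.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// checkSSH attempts a plain TCP connection to the guest's SSH port, the
	// first step behind the "NewSession: new client" errors in the log.
	func checkSSH(addr string) error {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			return err // e.g. "dial tcp 192.168.50.140:22: connect: no route to host"
		}
		defer conn.Close()
		return nil
	}

	func main() {
		// Address taken from the log above; unreachable once the guest is down.
		if err := checkSSH("192.168.50.140:22"); err != nil {
			fmt.Println("status check would fail:", err)
		}
	}
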

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-754132 -n embed-certs-754132
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-754132 -n embed-certs-754132: exit status 3 (3.199946941s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 00:55:08.972268   58496 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.83:22: connect: no route to host
	E1101 00:55:08.972290   58496 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.83:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-754132 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1101 00:55:11.541385   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/custom-flannel-090856/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-754132 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153880895s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.83:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-754132 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-754132 -n embed-certs-754132
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-754132 -n embed-certs-754132: exit status 3 (3.06343243s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 00:55:18.188322   58646 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.83:22: connect: no route to host
	E1101 00:55:18.188340   58646 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.83:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-754132" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-330042 -n old-k8s-version-330042
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-330042 -n old-k8s-version-330042: exit status 3 (3.201667159s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 00:55:18.188340   58616 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.90:22: connect: no route to host
	E1101 00:55:18.188353   58616 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.90:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-330042 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-330042 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.151652254s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.90:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-330042 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-330042 -n old-k8s-version-330042
E1101 00:55:27.281323   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/kindnet-090856/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-330042 -n old-k8s-version-330042: exit status 3 (3.062887023s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 00:55:27.404298   58792 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.90:22: connect: no route to host
	E1101 00:55:27.404321   58792 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.90:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-330042" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-639310 -n default-k8s-diff-port-639310
E1101 00:56:12.982153   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/custom-flannel-090856/client.crt: no such file or directory
E1101 00:56:13.279678   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/bridge-090856/client.crt: no such file or directory
E1101 00:56:14.122162   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/auto-090856/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-639310 -n default-k8s-diff-port-639310: exit status 3 (3.16744746s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 00:56:15.756276   59037 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.97:22: connect: no route to host
	E1101 00:56:15.756309   59037 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.97:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-639310 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1101 00:56:16.290830   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/calico-090856/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-639310 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153529463s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.97:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-639310 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-639310 -n default-k8s-diff-port-639310
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-639310 -n default-k8s-diff-port-639310: exit status 3 (3.062066344s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 00:56:24.972359   59107 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.97:22: connect: no route to host
	E1101 00:56:24.972382   59107 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.97:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-639310" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
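
Each EnableAddonAfterStop case first asserts that the host status reads "Stopped"; because the preceding stop timed out, the status command keeps returning "Error" instead, and the addon enable never gets a working SSH session. The sketch below polls the same status command quoted in the log to show what the test expects; the binary path and profile name are taken verbatim from the log and assume the test's working directory. It is an illustration, not the test's actual code.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// hostStatus runs the status command quoted in the log and returns its
	// trimmed stdout, e.g. "Stopped", "Running" or "Error". Errors are ignored
	// here because the command also exits non-zero when the host is unreachable.
	func hostStatus(profile string) string {
		out, _ := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", profile, "-n", profile).Output()
		return strings.TrimSpace(string(out))
	}

	func main() {
		profile := "default-k8s-diff-port-639310"
		// The test expects "Stopped" after a successful stop; in the run above
		// the stop timed out, so the status keeps coming back as "Error".
		for i := 0; i < 3; i++ {
			s := hostStatus(profile)
			if s == "Stopped" {
				fmt.Println("host stopped")
				return
			}
			fmt.Println("host status:", s)
			time.Sleep(2 * time.Second)
		}
	}
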

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1101 01:05:47.920381   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/flannel-090856/client.crt: no such file or directory
E1101 01:05:52.798864   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/bridge-090856/client.crt: no such file or directory
E1101 01:06:14.121946   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/auto-090856/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-754132 -n embed-certs-754132
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-11-01 01:14:41.759145726 +0000 UTC m=+5455.323727723
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-754132 -n embed-certs-754132
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-754132 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-754132 logs -n 25: (1.65167692s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p flannel-090856 sudo                                 | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | containerd config dump                                 |                              |         |                |                     |                     |
	| ssh     | -p flannel-090856 sudo                                 | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | systemctl status crio --all                            |                              |         |                |                     |                     |
	|         | --full --no-pager                                      |                              |         |                |                     |                     |
	| ssh     | -p flannel-090856 sudo                                 | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |                |                     |                     |
	| start   | -p embed-certs-754132                                  | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:52 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| ssh     | -p flannel-090856 sudo find                            | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |                |                     |                     |
	| ssh     | -p flannel-090856 sudo crio                            | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | config                                                 |                              |         |                |                     |                     |
	| delete  | -p flannel-090856                                      | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	| delete  | -p                                                     | disable-driver-mounts-130996 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | disable-driver-mounts-130996                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:53 UTC |
	|         | default-k8s-diff-port-639310                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-008483             | no-preload-008483            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC | 01 Nov 23 00:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-008483                                   | no-preload-008483            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-754132            | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC | 01 Nov 23 00:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-754132                                  | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-330042        | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC | 01 Nov 23 00:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-330042                              | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-639310  | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:53 UTC | 01 Nov 23 00:53 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:53 UTC |                     |
	|         | default-k8s-diff-port-639310                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-008483                  | no-preload-008483            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-754132                 | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-008483                                   | no-preload-008483            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC | 01 Nov 23 01:06 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| start   | -p embed-certs-754132                                  | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC | 01 Nov 23 01:05 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-330042             | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-330042                              | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC | 01 Nov 23 01:07 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-639310       | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:56 UTC | 01 Nov 23 01:06 UTC |
	|         | default-k8s-diff-port-639310                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/01 00:56:25
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 00:56:25.029853   59148 out.go:296] Setting OutFile to fd 1 ...
	I1101 00:56:25.030119   59148 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:56:25.030128   59148 out.go:309] Setting ErrFile to fd 2...
	I1101 00:56:25.030133   59148 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:56:25.030311   59148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7305/.minikube/bin
	I1101 00:56:25.030856   59148 out.go:303] Setting JSON to false
	I1101 00:56:25.031741   59148 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5930,"bootTime":1698794255,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 00:56:25.031805   59148 start.go:138] virtualization: kvm guest
	I1101 00:56:25.034341   59148 out.go:177] * [default-k8s-diff-port-639310] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1101 00:56:25.036261   59148 out.go:177]   - MINIKUBE_LOCATION=17486
	I1101 00:56:25.037829   59148 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 00:56:25.036294   59148 notify.go:220] Checking for updates...
	I1101 00:56:25.041068   59148 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 00:56:25.042691   59148 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7305/.minikube
	I1101 00:56:25.044204   59148 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 00:56:25.045719   59148 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 00:56:25.047781   59148 config.go:182] Loaded profile config "default-k8s-diff-port-639310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:56:25.048183   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:56:25.048245   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:56:25.062714   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34345
	I1101 00:56:25.063108   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:56:25.063662   59148 main.go:141] libmachine: Using API Version  1
	I1101 00:56:25.063682   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:56:25.064083   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:56:25.064302   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 00:56:25.064571   59148 driver.go:378] Setting default libvirt URI to qemu:///system
	I1101 00:56:25.064917   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:56:25.064958   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:56:25.079214   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46451
	I1101 00:56:25.079576   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:56:25.080090   59148 main.go:141] libmachine: Using API Version  1
	I1101 00:56:25.080115   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:56:25.080419   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:56:25.080616   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 00:56:25.119015   59148 out.go:177] * Using the kvm2 driver based on existing profile
	I1101 00:56:25.120650   59148 start.go:298] selected driver: kvm2
	I1101 00:56:25.120670   59148 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-639310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-639310 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.97 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:56:25.120819   59148 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 00:56:25.121515   59148 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:56:25.121580   59148 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17486-7305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1101 00:56:25.137482   59148 install.go:137] /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1101 00:56:25.137885   59148 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 00:56:25.137962   59148 cni.go:84] Creating CNI manager for ""
	I1101 00:56:25.137976   59148 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 00:56:25.137988   59148 start_flags.go:323] config:
	{Name:default-k8s-diff-port-639310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-639310 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.97 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:56:25.138186   59148 iso.go:125] acquiring lock: {Name:mk1f649ca0b7c1ae293cd66cb85f9eeda028b20b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:56:25.140405   59148 out.go:177] * Starting control plane node default-k8s-diff-port-639310 in cluster default-k8s-diff-port-639310
	I1101 00:56:25.141855   59148 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 00:56:25.141918   59148 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1101 00:56:25.141935   59148 cache.go:56] Caching tarball of preloaded images
	I1101 00:56:25.142048   59148 preload.go:174] Found /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 00:56:25.142066   59148 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1101 00:56:25.142204   59148 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/config.json ...
	I1101 00:56:25.142449   59148 start.go:365] acquiring machines lock for default-k8s-diff-port-639310: {Name:mk7aad88408c319111b9be8e59d9593a9e88374b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 00:56:26.060176   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:29.132322   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:35.212221   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:38.284225   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:44.364219   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:47.436224   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:53.516201   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:56.588256   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:02.668213   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:05.740252   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:11.820242   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:14.892259   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:20.972213   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:24.044181   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:30.124291   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:33.196239   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:39.276183   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:42.348235   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:48.428230   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:51.500275   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:57.580250   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:00.652208   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:06.732207   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:09.804251   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:15.884265   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:18.956206   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:25.040217   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:28.108288   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:34.188238   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:37.260268   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:43.340210   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:46.412248   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:52.492221   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:55.564188   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:01.644193   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:04.716194   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:10.796265   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:13.868226   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:19.948219   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:23.020283   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:29.100251   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:32.172268   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:38.252219   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:41.324223   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:47.404323   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:50.476273   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:53.480339   58730 start.go:369] acquired machines lock for "embed-certs-754132" in 4m35.118425724s
	I1101 00:59:53.480387   58730 start.go:96] Skipping create...Using existing machine configuration
	I1101 00:59:53.480393   58730 fix.go:54] fixHost starting: 
	I1101 00:59:53.480707   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:59:53.480737   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:59:53.495582   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34891
	I1101 00:59:53.495998   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:59:53.496445   58730 main.go:141] libmachine: Using API Version  1
	I1101 00:59:53.496466   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:59:53.496844   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:59:53.497017   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 00:59:53.497171   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetState
	I1101 00:59:53.498937   58730 fix.go:102] recreateIfNeeded on embed-certs-754132: state=Stopped err=<nil>
	I1101 00:59:53.498956   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	W1101 00:59:53.499128   58730 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 00:59:53.500909   58730 out.go:177] * Restarting existing kvm2 VM for "embed-certs-754132" ...
	I1101 00:59:53.478140   58676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 00:59:53.478177   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 00:59:53.480187   58676 machine.go:91] provisioned docker machine in 4m37.408348367s
	I1101 00:59:53.480232   58676 fix.go:56] fixHost completed within 4m37.430154401s
	I1101 00:59:53.480241   58676 start.go:83] releasing machines lock for "no-preload-008483", held for 4m37.430178737s
	W1101 00:59:53.480270   58676 start.go:691] error starting host: provision: host is not running
	W1101 00:59:53.480361   58676 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1101 00:59:53.480371   58676 start.go:706] Will try again in 5 seconds ...
	I1101 00:59:53.502467   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Start
	I1101 00:59:53.502656   58730 main.go:141] libmachine: (embed-certs-754132) Ensuring networks are active...
	I1101 00:59:53.503633   58730 main.go:141] libmachine: (embed-certs-754132) Ensuring network default is active
	I1101 00:59:53.504036   58730 main.go:141] libmachine: (embed-certs-754132) Ensuring network mk-embed-certs-754132 is active
	I1101 00:59:53.504557   58730 main.go:141] libmachine: (embed-certs-754132) Getting domain xml...
	I1101 00:59:53.505302   58730 main.go:141] libmachine: (embed-certs-754132) Creating domain...
	I1101 00:59:54.749625   58730 main.go:141] libmachine: (embed-certs-754132) Waiting to get IP...
	I1101 00:59:54.750551   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:54.750924   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:54.751002   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:54.750917   59675 retry.go:31] will retry after 295.652358ms: waiting for machine to come up
	I1101 00:59:55.048450   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:55.048884   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:55.048910   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:55.048845   59675 retry.go:31] will retry after 335.376353ms: waiting for machine to come up
	I1101 00:59:55.385612   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:55.385959   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:55.386000   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:55.385952   59675 retry.go:31] will retry after 353.381783ms: waiting for machine to come up
	I1101 00:59:55.740456   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:55.740943   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:55.740979   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:55.740874   59675 retry.go:31] will retry after 417.863733ms: waiting for machine to come up
	I1101 00:59:56.160773   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:56.161271   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:56.161298   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:56.161236   59675 retry.go:31] will retry after 659.454883ms: waiting for machine to come up
	I1101 00:59:56.822139   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:56.822551   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:56.822573   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:56.822511   59675 retry.go:31] will retry after 627.06089ms: waiting for machine to come up
	I1101 00:59:57.451254   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:57.451659   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:57.451687   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:57.451624   59675 retry.go:31] will retry after 1.095096876s: waiting for machine to come up
	I1101 00:59:58.481145   58676 start.go:365] acquiring machines lock for no-preload-008483: {Name:mk7aad88408c319111b9be8e59d9593a9e88374b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 00:59:58.548870   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:58.549359   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:58.549410   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:58.549323   59675 retry.go:31] will retry after 1.133377858s: waiting for machine to come up
	I1101 00:59:59.684741   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:59.685182   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:59.685205   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:59.685149   59675 retry.go:31] will retry after 1.332824718s: waiting for machine to come up
	I1101 01:00:01.019662   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:01.020166   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 01:00:01.020217   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 01:00:01.020119   59675 retry.go:31] will retry after 1.62664347s: waiting for machine to come up
	I1101 01:00:02.649017   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:02.649459   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 01:00:02.649490   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 01:00:02.649404   59675 retry.go:31] will retry after 2.043788133s: waiting for machine to come up
	I1101 01:00:04.695225   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:04.695657   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 01:00:04.695711   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 01:00:04.695640   59675 retry.go:31] will retry after 2.435347975s: waiting for machine to come up
	I1101 01:00:07.133078   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:07.133531   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 01:00:07.133567   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 01:00:07.133492   59675 retry.go:31] will retry after 2.768108097s: waiting for machine to come up
	I1101 01:00:09.903094   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:09.903460   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 01:00:09.903484   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 01:00:09.903424   59675 retry.go:31] will retry after 3.955575113s: waiting for machine to come up
	I1101 01:00:15.240546   58823 start.go:369] acquired machines lock for "old-k8s-version-330042" in 4m47.663537715s
	I1101 01:00:15.240608   58823 start.go:96] Skipping create...Using existing machine configuration
	I1101 01:00:15.240616   58823 fix.go:54] fixHost starting: 
	I1101 01:00:15.241087   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:00:15.241135   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:00:15.260921   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45157
	I1101 01:00:15.261342   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:00:15.261921   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:00:15.261954   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:00:15.262285   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:00:15.262488   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:15.262657   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetState
	I1101 01:00:15.264332   58823 fix.go:102] recreateIfNeeded on old-k8s-version-330042: state=Stopped err=<nil>
	I1101 01:00:15.264357   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	W1101 01:00:15.264541   58823 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 01:00:15.266960   58823 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-330042" ...
	I1101 01:00:13.860184   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.860818   58730 main.go:141] libmachine: (embed-certs-754132) Found IP for machine: 192.168.61.83
	I1101 01:00:13.860849   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has current primary IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.860866   58730 main.go:141] libmachine: (embed-certs-754132) Reserving static IP address...
	I1101 01:00:13.861321   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "embed-certs-754132", mac: "52:54:00:5e:2f:dd", ip: "192.168.61.83"} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:13.861350   58730 main.go:141] libmachine: (embed-certs-754132) Reserved static IP address: 192.168.61.83
	I1101 01:00:13.861362   58730 main.go:141] libmachine: (embed-certs-754132) DBG | skip adding static IP to network mk-embed-certs-754132 - found existing host DHCP lease matching {name: "embed-certs-754132", mac: "52:54:00:5e:2f:dd", ip: "192.168.61.83"}
	I1101 01:00:13.861372   58730 main.go:141] libmachine: (embed-certs-754132) Waiting for SSH to be available...
	I1101 01:00:13.861384   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Getting to WaitForSSH function...
	I1101 01:00:13.864760   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.865204   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:13.865232   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.865368   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Using SSH client type: external
	I1101 01:00:13.865408   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa (-rw-------)
	I1101 01:00:13.865434   58730 main.go:141] libmachine: (embed-certs-754132) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.83 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 01:00:13.865446   58730 main.go:141] libmachine: (embed-certs-754132) DBG | About to run SSH command:
	I1101 01:00:13.865454   58730 main.go:141] libmachine: (embed-certs-754132) DBG | exit 0
	I1101 01:00:13.964103   58730 main.go:141] libmachine: (embed-certs-754132) DBG | SSH cmd err, output: <nil>: 
	I1101 01:00:13.964444   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetConfigRaw
	I1101 01:00:13.965066   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetIP
	I1101 01:00:13.967463   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.967768   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:13.967791   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.968100   58730 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/config.json ...
	I1101 01:00:13.968294   58730 machine.go:88] provisioning docker machine ...
	I1101 01:00:13.968312   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:00:13.968530   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetMachineName
	I1101 01:00:13.968707   58730 buildroot.go:166] provisioning hostname "embed-certs-754132"
	I1101 01:00:13.968728   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetMachineName
	I1101 01:00:13.968901   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:13.971288   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.971637   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:13.971676   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.971792   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:13.972000   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:13.972181   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:13.972312   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:13.972476   58730 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:13.972798   58730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I1101 01:00:13.972812   58730 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-754132 && echo "embed-certs-754132" | sudo tee /etc/hostname
	I1101 01:00:14.121000   58730 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-754132
	
	I1101 01:00:14.121036   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:14.124379   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.124813   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:14.124840   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.125085   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:14.125339   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:14.125667   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:14.125832   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:14.126091   58730 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:14.126401   58730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I1101 01:00:14.126418   58730 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-754132' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-754132/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-754132' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 01:00:14.268155   58730 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 01:00:14.268188   58730 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1101 01:00:14.268210   58730 buildroot.go:174] setting up certificates
	I1101 01:00:14.268238   58730 provision.go:83] configureAuth start
	I1101 01:00:14.268255   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetMachineName
	I1101 01:00:14.268542   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetIP
	I1101 01:00:14.271516   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.271946   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:14.271984   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.272150   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:14.274610   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.275017   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:14.275054   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.275206   58730 provision.go:138] copyHostCerts
	I1101 01:00:14.275269   58730 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem, removing ...
	I1101 01:00:14.275282   58730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 01:00:14.275351   58730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1101 01:00:14.275442   58730 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem, removing ...
	I1101 01:00:14.275450   58730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 01:00:14.275475   58730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1101 01:00:14.275526   58730 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem, removing ...
	I1101 01:00:14.275533   58730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 01:00:14.275571   58730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1101 01:00:14.275616   58730 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.embed-certs-754132 san=[192.168.61.83 192.168.61.83 localhost 127.0.0.1 minikube embed-certs-754132]
	I1101 01:00:14.494175   58730 provision.go:172] copyRemoteCerts
	I1101 01:00:14.494239   58730 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 01:00:14.494265   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:14.496921   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.497263   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:14.497310   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.497482   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:14.497748   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:14.497906   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:14.498052   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:00:14.592739   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 01:00:14.614862   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1101 01:00:14.636483   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1101 01:00:14.658154   58730 provision.go:86] duration metric: configureAuth took 389.900669ms
	I1101 01:00:14.658179   58730 buildroot.go:189] setting minikube options for container-runtime
	I1101 01:00:14.658364   58730 config.go:182] Loaded profile config "embed-certs-754132": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:00:14.658478   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:14.661110   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.661450   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:14.661500   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.661667   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:14.661853   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:14.661997   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:14.662120   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:14.662279   58730 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:14.662573   58730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I1101 01:00:14.662589   58730 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 01:00:14.974481   58730 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 01:00:14.974505   58730 machine.go:91] provisioned docker machine in 1.006198078s
	I1101 01:00:14.974521   58730 start.go:300] post-start starting for "embed-certs-754132" (driver="kvm2")
	I1101 01:00:14.974534   58730 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 01:00:14.974556   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:00:14.974913   58730 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 01:00:14.974946   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:14.977485   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.977815   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:14.977846   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.977970   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:14.978146   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:14.978310   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:14.978470   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:00:15.073889   58730 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 01:00:15.077710   58730 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 01:00:15.077734   58730 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1101 01:00:15.077791   58730 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1101 01:00:15.077855   58730 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> 145042.pem in /etc/ssl/certs
	I1101 01:00:15.077961   58730 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 01:00:15.086567   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:00:15.107446   58730 start.go:303] post-start completed in 132.911351ms
	I1101 01:00:15.107468   58730 fix.go:56] fixHost completed within 21.627074953s
	I1101 01:00:15.107485   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:15.110070   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.110392   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:15.110426   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.110552   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:15.110748   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:15.110914   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:15.111078   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:15.111268   58730 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:15.111683   58730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I1101 01:00:15.111696   58730 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1101 01:00:15.240326   58730 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698800415.188118531
	
	I1101 01:00:15.240357   58730 fix.go:206] guest clock: 1698800415.188118531
	I1101 01:00:15.240365   58730 fix.go:219] Guest: 2023-11-01 01:00:15.188118531 +0000 UTC Remote: 2023-11-01 01:00:15.107470988 +0000 UTC m=+296.909935143 (delta=80.647543ms)
	I1101 01:00:15.240385   58730 fix.go:190] guest clock delta is within tolerance: 80.647543ms
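	The guest-clock check above (the mangled `date +%!s(MISSING).%!N(MISSING)` is presumably `date +%s.%N` with its format verbs eaten by the log formatter) parses the VM clock and compares it to the host clock; the log shows an ~80ms delta being accepted. A minimal, hypothetical Go sketch of that comparison, not minikube's actual fix.go code:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns the output of `date +%s.%N` into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, _ := parseGuestClock("1698800415.188118531")
		delta := time.Since(guest)
		// The tolerance here is arbitrary; it only illustrates the shape of the check.
		fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta.Abs() < 2*time.Second)
	}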
	I1101 01:00:15.240420   58730 start.go:83] releasing machines lock for "embed-certs-754132", held for 21.760022516s
	I1101 01:00:15.240464   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:00:15.240736   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetIP
	I1101 01:00:15.243570   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.243905   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:15.243961   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.244163   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:00:15.244698   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:00:15.244872   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:00:15.244948   58730 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 01:00:15.245012   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:15.245063   58730 ssh_runner.go:195] Run: cat /version.json
	I1101 01:00:15.245089   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:15.247618   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.247886   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.247985   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:15.248018   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.248265   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:15.248358   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:15.248387   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.248422   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:15.248600   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:15.248601   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:15.248774   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:15.248765   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:00:15.248913   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:15.249034   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:00:15.383514   58730 ssh_runner.go:195] Run: systemctl --version
	I1101 01:00:15.389291   58730 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 01:00:15.531982   58730 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 01:00:15.537622   58730 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 01:00:15.537711   58730 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 01:00:15.554440   58730 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 01:00:15.554488   58730 start.go:472] detecting cgroup driver to use...
	I1101 01:00:15.554549   58730 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 01:00:15.569732   58730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 01:00:15.582752   58730 docker.go:204] disabling cri-docker service (if available) ...
	I1101 01:00:15.582795   58730 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 01:00:15.596221   58730 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 01:00:15.609815   58730 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 01:00:15.717679   58730 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 01:00:15.842128   58730 docker.go:220] disabling docker service ...
	I1101 01:00:15.842203   58730 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 01:00:15.854613   58730 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 01:00:15.869487   58730 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 01:00:15.991107   58730 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 01:00:16.118392   58730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 01:00:16.131570   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 01:00:16.150691   58730 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 01:00:16.150755   58730 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:16.160081   58730 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 01:00:16.160171   58730 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:16.170277   58730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:16.180469   58730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:16.189966   58730 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 01:00:16.199465   58730 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 01:00:16.207995   58730 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 01:00:16.208057   58730 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 01:00:16.221491   58730 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 01:00:16.231855   58730 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 01:00:16.355227   58730 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 01:00:16.520341   58730 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 01:00:16.520403   58730 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 01:00:16.525071   58730 start.go:540] Will wait 60s for crictl version
	I1101 01:00:16.525143   58730 ssh_runner.go:195] Run: which crictl
	I1101 01:00:16.529138   58730 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 01:00:16.566007   58730 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1101 01:00:16.566082   58730 ssh_runner.go:195] Run: crio --version
	I1101 01:00:16.612652   58730 ssh_runner.go:195] Run: crio --version
	I1101 01:00:16.665668   58730 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
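	Taken together, the commands above point crictl at /var/run/crio/crio.sock, rewrite pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf, load br_netfilter, and restart CRI-O. A rough local sketch of the same edits (a hypothetical helper that requires root; minikube actually issues these commands through its ssh_runner):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// configureCRIO mirrors the steps logged above: set the pause image and the
	// cgroup manager in CRI-O's drop-in config, load br_netfilter, restart crio.
	func configureCRIO(pauseImage, cgroupManager string) error {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		cmds := [][]string{
			{"sed", "-i", fmt.Sprintf(`s|^.*pause_image = .*$|pause_image = "%s"|`, pauseImage), conf},
			{"sed", "-i", fmt.Sprintf(`s|^.*cgroup_manager = .*$|cgroup_manager = "%s"|`, cgroupManager), conf},
			{"modprobe", "br_netfilter"},
			{"systemctl", "restart", "crio"},
		}
		for _, c := range cmds {
			if out, err := exec.Command(c[0], c[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %v: %s", c, err, out)
			}
		}
		return nil
	}

	func main() {
		if err := configureCRIO("registry.k8s.io/pause:3.9", "cgroupfs"); err != nil {
			fmt.Println("configure failed:", err)
		}
	}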
	I1101 01:00:15.268389   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Start
	I1101 01:00:15.268575   58823 main.go:141] libmachine: (old-k8s-version-330042) Ensuring networks are active...
	I1101 01:00:15.269280   58823 main.go:141] libmachine: (old-k8s-version-330042) Ensuring network default is active
	I1101 01:00:15.269618   58823 main.go:141] libmachine: (old-k8s-version-330042) Ensuring network mk-old-k8s-version-330042 is active
	I1101 01:00:15.270056   58823 main.go:141] libmachine: (old-k8s-version-330042) Getting domain xml...
	I1101 01:00:15.270814   58823 main.go:141] libmachine: (old-k8s-version-330042) Creating domain...
	I1101 01:00:16.566526   58823 main.go:141] libmachine: (old-k8s-version-330042) Waiting to get IP...
	I1101 01:00:16.567713   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:16.568239   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:16.568336   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:16.568220   59797 retry.go:31] will retry after 200.046919ms: waiting for machine to come up
	I1101 01:00:16.769849   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:16.770436   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:16.770477   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:16.770427   59797 retry.go:31] will retry after 301.397937ms: waiting for machine to come up
	I1101 01:00:17.074180   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:17.074657   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:17.074689   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:17.074626   59797 retry.go:31] will retry after 462.511505ms: waiting for machine to come up
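	The retry.go:31 lines above are a backoff loop: libmachine keeps polling for the domain's DHCP lease and sleeps a growing, jittered interval between attempts. A generic sketch of that pattern (hypothetical helper, not minikube's retry package):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff calls fn until it succeeds or the deadline passes,
	// sleeping a jittered, doubling interval between attempts - the same shape
	// as the "will retry after ..." messages in the log above.
	func retryWithBackoff(deadline time.Duration, fn func() error) error {
		start := time.Now()
		wait := 200 * time.Millisecond
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Since(start) > deadline {
				return fmt.Errorf("gave up after %v: %w", deadline, err)
			}
			sleep := wait + time.Duration(rand.Int63n(int64(wait)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			wait *= 2
		}
	}

	func main() {
		attempts := 0
		_ = retryWithBackoff(10*time.Second, func() error {
			attempts++
			if attempts < 4 {
				return errors.New("waiting for machine to come up")
			}
			return nil
		})
	}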
	I1101 01:00:16.667657   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetIP
	I1101 01:00:16.670756   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:16.671148   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:16.671216   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:16.671377   58730 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1101 01:00:16.675342   58730 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:00:16.687224   58730 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 01:00:16.687310   58730 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:00:16.726714   58730 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1101 01:00:16.726779   58730 ssh_runner.go:195] Run: which lz4
	I1101 01:00:16.730745   58730 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1101 01:00:16.734588   58730 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 01:00:16.734623   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1101 01:00:17.538840   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:17.539313   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:17.539337   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:17.539276   59797 retry.go:31] will retry after 562.894181ms: waiting for machine to come up
	I1101 01:00:18.104173   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:18.104678   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:18.104712   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:18.104641   59797 retry.go:31] will retry after 659.582768ms: waiting for machine to come up
	I1101 01:00:18.766319   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:18.766719   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:18.766749   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:18.766688   59797 retry.go:31] will retry after 626.783168ms: waiting for machine to come up
	I1101 01:00:19.395203   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:19.395693   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:19.395720   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:19.395651   59797 retry.go:31] will retry after 884.294618ms: waiting for machine to come up
	I1101 01:00:20.281677   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:20.282152   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:20.282176   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:20.282094   59797 retry.go:31] will retry after 997.794459ms: waiting for machine to come up
	I1101 01:00:21.281118   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:21.281568   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:21.281596   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:21.281525   59797 retry.go:31] will retry after 1.624252325s: waiting for machine to come up
	I1101 01:00:18.514400   58730 crio.go:444] Took 1.783693 seconds to copy over tarball
	I1101 01:00:18.514460   58730 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 01:00:21.481089   58730 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.966600648s)
	I1101 01:00:21.481118   58730 crio.go:451] Took 2.966695 seconds to extract the tarball
	I1101 01:00:21.481130   58730 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 01:00:21.520934   58730 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:00:21.568541   58730 crio.go:496] all images are preloaded for cri-o runtime.
	I1101 01:00:21.568569   58730 cache_images.go:84] Images are preloaded, skipping loading
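	The preload path above is: stat /preloaded.tar.lz4 on the guest, copy the ~457 MB preload tarball over when it is missing, unpack it with `tar -I lz4 -C /var`, then re-run `crictl images` to confirm the images are now present. The extraction step, as a small local sketch (hypothetical paths, needs the lz4 binary on PATH):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// extractPreload unpacks an lz4-compressed image preload into dir, mirroring
	// the `sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4` command in the log.
	func extractPreload(tarball, dir string) error {
		if _, err := os.Stat(tarball); err != nil {
			return fmt.Errorf("preload not present, it would have to be copied first: %w", err)
		}
		cmd := exec.Command("tar", "-I", "lz4", "-C", dir, "-xf", tarball)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
			fmt.Println(err)
		}
	}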
	I1101 01:00:21.568638   58730 ssh_runner.go:195] Run: crio config
	I1101 01:00:21.626687   58730 cni.go:84] Creating CNI manager for ""
	I1101 01:00:21.626707   58730 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:00:21.626724   58730 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 01:00:21.626745   58730 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.83 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-754132 NodeName:embed-certs-754132 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.83"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.83 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 01:00:21.626906   58730 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.83
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-754132"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.83
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.83"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 01:00:21.627000   58730 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-754132 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.83
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:embed-certs-754132 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1101 01:00:21.627062   58730 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 01:00:21.635965   58730 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 01:00:21.636048   58730 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 01:00:21.644318   58730 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1101 01:00:21.659722   58730 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 01:00:21.674541   58730 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1101 01:00:21.690451   58730 ssh_runner.go:195] Run: grep 192.168.61.83	control-plane.minikube.internal$ /etc/hosts
	I1101 01:00:21.694013   58730 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.83	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:00:21.705929   58730 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132 for IP: 192.168.61.83
	I1101 01:00:21.705978   58730 certs.go:190] acquiring lock for shared ca certs: {Name:mk6185b3938c4b51e80f57b7c81926c81b632b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:00:21.706152   58730 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key
	I1101 01:00:21.706193   58730 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key
	I1101 01:00:21.706255   58730 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/client.key
	I1101 01:00:21.706321   58730 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/apiserver.key.00ce3257
	I1101 01:00:21.706365   58730 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/proxy-client.key
	I1101 01:00:21.706507   58730 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem (1338 bytes)
	W1101 01:00:21.706541   58730 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504_empty.pem, impossibly tiny 0 bytes
	I1101 01:00:21.706552   58730 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 01:00:21.706580   58730 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem (1078 bytes)
	I1101 01:00:21.706606   58730 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem (1123 bytes)
	I1101 01:00:21.706633   58730 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem (1675 bytes)
	I1101 01:00:21.706670   58730 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:00:21.707263   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 01:00:21.734199   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 01:00:21.760230   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 01:00:21.787083   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 01:00:21.810498   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 01:00:21.833905   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 01:00:21.859073   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 01:00:21.881222   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 01:00:21.904432   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem --> /usr/share/ca-certificates/14504.pem (1338 bytes)
	I1101 01:00:21.934873   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /usr/share/ca-certificates/145042.pem (1708 bytes)
	I1101 01:00:21.958353   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 01:00:21.981353   58730 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 01:00:21.997436   58730 ssh_runner.go:195] Run: openssl version
	I1101 01:00:22.003487   58730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14504.pem && ln -fs /usr/share/ca-certificates/14504.pem /etc/ssl/certs/14504.pem"
	I1101 01:00:22.013829   58730 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14504.pem
	I1101 01:00:22.018482   58730 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 01:00:22.018554   58730 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14504.pem
	I1101 01:00:22.024695   58730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14504.pem /etc/ssl/certs/51391683.0"
	I1101 01:00:22.034956   58730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145042.pem && ln -fs /usr/share/ca-certificates/145042.pem /etc/ssl/certs/145042.pem"
	I1101 01:00:22.046182   58730 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145042.pem
	I1101 01:00:22.051197   58730 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 01:00:22.051273   58730 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145042.pem
	I1101 01:00:22.057145   58730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145042.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 01:00:22.067337   58730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 01:00:22.077300   58730 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:22.081973   58730 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:22.082025   58730 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:22.087341   58730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
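	The openssl/ln pairs above follow OpenSSL's hashed-directory convention: each trusted PEM gets a symlink in /etc/ssl/certs named after its subject hash (e.g. b5213941.0 for minikubeCA.pem) so the TLS stack can locate it by hash. A compact sketch of the same idea (hypothetical helper that shells out to openssl):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash creates certsDir/<subject-hash>.0 pointing at pemPath,
	// which is what the `openssl x509 -hash` + `ln -fs` commands in the log do.
	func linkBySubjectHash(pemPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // mimic `ln -fs`
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Println(err)
		}
	}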
	I1101 01:00:22.097021   58730 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 01:00:22.101801   58730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 01:00:22.107498   58730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 01:00:22.113187   58730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 01:00:22.119281   58730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 01:00:22.125109   58730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 01:00:22.130878   58730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 01:00:22.136711   58730 kubeadm.go:404] StartCluster: {Name:embed-certs-754132 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:embed-certs-754132 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.83 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 01:00:22.136843   58730 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 01:00:22.136898   58730 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:00:22.172188   58730 cri.go:89] found id: ""
	I1101 01:00:22.172267   58730 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 01:00:22.181863   58730 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1101 01:00:22.181901   58730 kubeadm.go:636] restartCluster start
	I1101 01:00:22.181962   58730 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 01:00:22.190970   58730 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:22.192108   58730 kubeconfig.go:92] found "embed-certs-754132" server: "https://192.168.61.83:8443"
	I1101 01:00:22.194633   58730 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 01:00:22.203708   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:22.203792   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:22.214867   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:22.214889   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:22.214972   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:22.225940   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:22.726677   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:22.726769   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:22.737874   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:23.226416   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:23.226492   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:23.237902   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:22.907053   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:22.907532   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:22.907563   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:22.907487   59797 retry.go:31] will retry after 2.170221456s: waiting for machine to come up
	I1101 01:00:25.079354   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:25.079791   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:25.079831   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:25.079754   59797 retry.go:31] will retry after 2.279141994s: waiting for machine to come up
	I1101 01:00:27.361955   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:27.362423   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:27.362456   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:27.362368   59797 retry.go:31] will retry after 2.772425742s: waiting for machine to come up
	I1101 01:00:23.726108   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:23.726179   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:23.737404   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:24.226007   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:24.226178   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:24.237401   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:24.727058   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:24.727152   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:24.742704   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:25.226166   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:25.226272   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:25.237808   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:25.726161   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:25.726244   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:25.737763   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:26.226321   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:26.226485   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:26.239919   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:26.726488   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:26.726596   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:26.740719   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:27.226157   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:27.226268   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:27.240719   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:27.726272   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:27.726360   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:27.738068   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:28.226882   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:28.226954   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:28.239208   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:30.136893   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:30.137311   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:30.137333   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:30.137274   59797 retry.go:31] will retry after 4.191062934s: waiting for machine to come up
	I1101 01:00:28.726726   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:28.726845   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:28.737955   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:29.226410   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:29.226475   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:29.237886   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:29.726367   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:29.726461   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:29.737767   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:30.226294   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:30.226389   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:30.237767   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:30.726295   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:30.726363   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:30.737691   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:31.226274   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:31.226343   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:31.237801   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:31.726297   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:31.726366   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:31.738060   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:32.204696   58730 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1101 01:00:32.204729   58730 kubeadm.go:1128] stopping kube-system containers ...
	I1101 01:00:32.204741   58730 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 01:00:32.204792   58730 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:00:32.241943   58730 cri.go:89] found id: ""
	I1101 01:00:32.242012   58730 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 01:00:32.256657   58730 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:00:32.265087   58730 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:00:32.265159   58730 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:00:32.273631   58730 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 01:00:32.273654   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:32.379073   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
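The two kubeadm invocations above (phase certs, phase kubeconfig) are how the run rebuilds an existing control plane piecewise instead of doing a full init. Below is a minimal local sketch of driving the same phases from Go; it assumes kubeadm is on PATH and that the kubeadm.yaml path from the log exists locally, whereas the test actually executes these commands on the guest through ssh_runner.

// Hedged sketch: run selected "kubeadm init phase" commands, mirroring the log above.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func runPhase(config string, phase ...string) error {
	// equivalent to: sudo kubeadm init phase <phase...> --config <config>
	args := append([]string{"kubeadm", "init", "phase"}, phase...)
	args = append(args, "--config", config)
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubeadm phase %v failed: %v\n%s", phase, err, out)
	}
	return nil
}

func main() {
	const cfg = "/var/tmp/minikube/kubeadm.yaml" // path taken from the log
	if err := runPhase(cfg, "certs", "all"); err != nil {
		log.Fatal(err)
	}
	if err := runPhase(cfg, "kubeconfig", "all"); err != nil {
		log.Fatal(err)
	}
}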
	I1101 01:00:35.634014   59148 start.go:369] acquired machines lock for "default-k8s-diff-port-639310" in 4m10.491521982s
	I1101 01:00:35.634070   59148 start.go:96] Skipping create...Using existing machine configuration
	I1101 01:00:35.634078   59148 fix.go:54] fixHost starting: 
	I1101 01:00:35.634533   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:00:35.634577   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:00:35.654259   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46439
	I1101 01:00:35.654746   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:00:35.655216   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:00:35.655245   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:00:35.655578   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:00:35.655759   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:35.655905   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetState
	I1101 01:00:35.657604   59148 fix.go:102] recreateIfNeeded on default-k8s-diff-port-639310: state=Stopped err=<nil>
	I1101 01:00:35.657646   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	W1101 01:00:35.657804   59148 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 01:00:35.660028   59148 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-639310" ...
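fix.go found the default-k8s-diff-port VM in state Stopped, so the existing domain is restarted rather than recreated. A rough stand-in using the libvirt CLI is sketched below; the real path goes through the docker-machine-driver-kvm2 plugin over its RPC interface, not virsh, so treat this only as an illustration of the operation.

// Hedged sketch: start an existing, stopped libvirt domain by name.
package main

import (
	"log"
	"os/exec"
)

func main() {
	domain := "default-k8s-diff-port-639310" // domain name from the log
	out, err := exec.Command("virsh", "start", domain).CombinedOutput()
	if err != nil {
		log.Fatalf("virsh start %s: %v\n%s", domain, err, out)
	}
	log.Printf("started %s:\n%s", domain, out)
}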
	I1101 01:00:34.332963   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.333486   58823 main.go:141] libmachine: (old-k8s-version-330042) Found IP for machine: 192.168.39.90
	I1101 01:00:34.333518   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has current primary IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.333529   58823 main.go:141] libmachine: (old-k8s-version-330042) Reserving static IP address...
	I1101 01:00:34.333853   58823 main.go:141] libmachine: (old-k8s-version-330042) Reserved static IP address: 192.168.39.90
	I1101 01:00:34.333874   58823 main.go:141] libmachine: (old-k8s-version-330042) Waiting for SSH to be available...
	I1101 01:00:34.333901   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "old-k8s-version-330042", mac: "52:54:00:a2:40:80", ip: "192.168.39.90"} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.333932   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | skip adding static IP to network mk-old-k8s-version-330042 - found existing host DHCP lease matching {name: "old-k8s-version-330042", mac: "52:54:00:a2:40:80", ip: "192.168.39.90"}
	I1101 01:00:34.333954   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Getting to WaitForSSH function...
	I1101 01:00:34.335871   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.336238   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.336275   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.336409   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Using SSH client type: external
	I1101 01:00:34.336446   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa (-rw-------)
	I1101 01:00:34.336480   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.90 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 01:00:34.336501   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | About to run SSH command:
	I1101 01:00:34.336523   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | exit 0
	I1101 01:00:34.431938   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | SSH cmd err, output: <nil>: 
	I1101 01:00:34.432324   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetConfigRaw
	I1101 01:00:34.433070   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetIP
	I1101 01:00:34.435967   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.436402   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.436434   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.436696   58823 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/config.json ...
	I1101 01:00:34.436886   58823 machine.go:88] provisioning docker machine ...
	I1101 01:00:34.436903   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:34.437136   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetMachineName
	I1101 01:00:34.437299   58823 buildroot.go:166] provisioning hostname "old-k8s-version-330042"
	I1101 01:00:34.437323   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetMachineName
	I1101 01:00:34.437508   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:34.439785   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.440175   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.440215   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.440316   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:34.440481   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:34.440662   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:34.440800   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:34.440965   58823 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:34.441440   58823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1101 01:00:34.441461   58823 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-330042 && echo "old-k8s-version-330042" | sudo tee /etc/hostname
	I1101 01:00:34.590132   58823 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-330042
	
	I1101 01:00:34.590168   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:34.593018   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.593457   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.593521   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.593623   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:34.593817   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:34.594004   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:34.594151   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:34.594317   58823 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:34.594622   58823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1101 01:00:34.594640   58823 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-330042' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-330042/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-330042' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 01:00:34.743448   58823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
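The shell fragment run just above makes the new hostname resolve locally: if no line in /etc/hosts already ends with the hostname, it either rewrites the existing 127.0.1.1 entry or appends one. The same decision expressed in Go, as a sketch over an in-memory copy of the file rather than the guest's /etc/hosts:

// Hedged sketch of the /etc/hosts hostname fixup shown in the SSH command above.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname leaves the content alone if the hostname is already present,
// otherwise rewrites the 127.0.1.1 line or appends a new one.
func ensureHostname(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
		return hosts
	}
	line127 := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if line127.MatchString(hosts) {
		return line127.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	sample := "127.0.0.1 localhost\n" // illustrative input, not the guest's file
	fmt.Print(ensureHostname(sample, "old-k8s-version-330042"))
}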
	I1101 01:00:34.743485   58823 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1101 01:00:34.743510   58823 buildroot.go:174] setting up certificates
	I1101 01:00:34.743530   58823 provision.go:83] configureAuth start
	I1101 01:00:34.743545   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetMachineName
	I1101 01:00:34.743848   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetIP
	I1101 01:00:34.746932   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.747302   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.747333   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.747478   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:34.749794   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.750154   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.750185   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.750339   58823 provision.go:138] copyHostCerts
	I1101 01:00:34.750412   58823 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem, removing ...
	I1101 01:00:34.750435   58823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 01:00:34.750504   58823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1101 01:00:34.750620   58823 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem, removing ...
	I1101 01:00:34.750628   58823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 01:00:34.750655   58823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1101 01:00:34.750726   58823 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem, removing ...
	I1101 01:00:34.750736   58823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 01:00:34.750761   58823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1101 01:00:34.750820   58823 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-330042 san=[192.168.39.90 192.168.39.90 localhost 127.0.0.1 minikube old-k8s-version-330042]
	I1101 01:00:34.819269   58823 provision.go:172] copyRemoteCerts
	I1101 01:00:34.819327   58823 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 01:00:34.819354   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:34.822409   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.822852   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.822887   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.823101   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:34.823335   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:34.823520   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:34.823688   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:00:34.928534   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 01:00:34.955140   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1101 01:00:34.982361   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 01:00:35.007980   58823 provision.go:86] duration metric: configureAuth took 264.432358ms
	I1101 01:00:35.008007   58823 buildroot.go:189] setting minikube options for container-runtime
	I1101 01:00:35.008317   58823 config.go:182] Loaded profile config "old-k8s-version-330042": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1101 01:00:35.008450   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:35.011424   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.011790   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:35.011820   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.012054   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:35.012305   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.012505   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.012692   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:35.012898   58823 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:35.013292   58823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1101 01:00:35.013310   58823 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 01:00:35.345179   58823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 01:00:35.345210   58823 machine.go:91] provisioned docker machine in 908.310008ms
	I1101 01:00:35.345224   58823 start.go:300] post-start starting for "old-k8s-version-330042" (driver="kvm2")
	I1101 01:00:35.345236   58823 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 01:00:35.345283   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:35.345634   58823 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 01:00:35.345666   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:35.348576   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.348945   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:35.348978   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.349171   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:35.349364   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.349527   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:35.349672   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:00:35.448239   58823 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 01:00:35.453459   58823 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 01:00:35.453495   58823 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1101 01:00:35.453589   58823 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1101 01:00:35.453705   58823 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> 145042.pem in /etc/ssl/certs
	I1101 01:00:35.453819   58823 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 01:00:35.464658   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:00:35.488669   58823 start.go:303] post-start completed in 143.429717ms
	I1101 01:00:35.488699   58823 fix.go:56] fixHost completed within 20.248082329s
	I1101 01:00:35.488723   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:35.491535   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.491917   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:35.491962   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.492108   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:35.492302   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.492472   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.492610   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:35.492777   58823 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:35.493085   58823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1101 01:00:35.493097   58823 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1101 01:00:35.633831   58823 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698800435.580601462
	
	I1101 01:00:35.633860   58823 fix.go:206] guest clock: 1698800435.580601462
	I1101 01:00:35.633872   58823 fix.go:219] Guest: 2023-11-01 01:00:35.580601462 +0000 UTC Remote: 2023-11-01 01:00:35.488703086 +0000 UTC m=+308.076532844 (delta=91.898376ms)
	I1101 01:00:35.633899   58823 fix.go:190] guest clock delta is within tolerance: 91.898376ms
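The fix.go lines compare the guest clock (read over SSH with date) against the host's timestamp and accept the ~92 ms drift as within tolerance. A small sketch of that comparison follows; the one-second tolerance is an assumption, since the log does not state the actual threshold.

// Hedged sketch of the guest/host clock-delta check logged by fix.go.
package main

import (
	"fmt"
	"time"
)

// withinTolerance returns the absolute clock delta and whether it is acceptable.
func withinTolerance(guest, remote time.Time, tol time.Duration) (time.Duration, bool) {
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	guest := time.Unix(0, 1698800435580601462)          // guest clock value from the log
	remote := guest.Add(-91898376 * time.Nanosecond)    // reproduces the 91.898376ms delta
	d, ok := withinTolerance(guest, remote, time.Second) // tolerance value is assumed
	fmt.Printf("delta=%v within=%v\n", d, ok)
}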
	I1101 01:00:35.633906   58823 start.go:83] releasing machines lock for "old-k8s-version-330042", held for 20.393324923s
	I1101 01:00:35.633937   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:35.634276   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetIP
	I1101 01:00:35.637052   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.637411   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:35.637462   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.637668   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:35.638239   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:35.638479   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:35.638661   58823 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 01:00:35.638703   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:35.638792   58823 ssh_runner.go:195] Run: cat /version.json
	I1101 01:00:35.638813   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:35.641913   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:35.641919   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.642071   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:35.642094   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.642106   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.642151   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.642323   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:35.642517   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:35.642547   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.642608   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:35.642640   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:00:35.642736   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.642872   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:35.642994   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:00:35.772469   58823 ssh_runner.go:195] Run: systemctl --version
	I1101 01:00:35.778377   58823 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 01:00:35.930189   58823 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 01:00:35.937481   58823 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 01:00:35.937583   58823 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 01:00:35.959054   58823 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 01:00:35.959081   58823 start.go:472] detecting cgroup driver to use...
	I1101 01:00:35.959166   58823 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 01:00:35.978338   58823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 01:00:35.994627   58823 docker.go:204] disabling cri-docker service (if available) ...
	I1101 01:00:35.994690   58823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 01:00:36.010212   58823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 01:00:36.025616   58823 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 01:00:36.132484   58823 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 01:00:36.266531   58823 docker.go:220] disabling docker service ...
	I1101 01:00:36.266604   58823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 01:00:36.280303   58823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 01:00:36.291905   58823 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 01:00:36.413114   58823 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 01:00:36.527297   58823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 01:00:36.540547   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 01:00:36.561997   58823 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1101 01:00:36.562070   58823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:36.574735   58823 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 01:00:36.574809   58823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:36.584015   58823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:36.592896   58823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
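These sed invocations retarget /etc/crio/crio.conf.d/02-crio.conf for the old-k8s-version profile: the pause image is pinned to registry.k8s.io/pause:3.1, the cgroup manager is forced to cgroupfs, and conmon_cgroup is dropped and re-added as "pod" right after the cgroup_manager line. A hedged Go sketch of the same line edits, operating on a string instead of the remote file:

// Hedged sketch of the crio.conf edits done with sed in the log above.
package main

import (
	"fmt"
	"regexp"
)

func rewriteCrioConf(conf, pauseImage, cgroupMgr string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupMgr))
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	// re-insert conmon_cgroup after the cgroup_manager line, as the sed '/a' command does
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	sample := "pause_image = \"k8s.gcr.io/pause:3.2\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(rewriteCrioConf(sample, "registry.k8s.io/pause:3.1", "cgroupfs"))
}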
	I1101 01:00:36.602199   58823 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 01:00:36.611742   58823 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 01:00:36.620073   58823 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 01:00:36.620140   58823 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 01:00:36.633237   58823 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
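The sysctl probe above fails with status 255 because /proc/sys/net/bridge/bridge-nf-call-iptables does not exist until the bridge netfilter module is loaded, so the run falls back to modprobe br_netfilter and then switches on IPv4 forwarding. A standalone sketch of that probe-then-load fallback (it must run as root, just as the log runs these commands via sudo over SSH):

// Hedged sketch of the br_netfilter / ip_forward fallback seen in the log.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const knob = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(knob); err != nil {
		log.Printf("%s not present (%v); loading br_netfilter", knob, err)
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v\n%s", err, out)
		}
	}
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		log.Fatalf("enabling ip_forward: %v", err)
	}
}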
	I1101 01:00:36.641679   58823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 01:00:36.786323   58823 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 01:00:37.011240   58823 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 01:00:37.011332   58823 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 01:00:37.016349   58823 start.go:540] Will wait 60s for crictl version
	I1101 01:00:37.016417   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:37.020952   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 01:00:37.068566   58823 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1101 01:00:37.068649   58823 ssh_runner.go:195] Run: crio --version
	I1101 01:00:37.119257   58823 ssh_runner.go:195] Run: crio --version
	I1101 01:00:37.170471   58823 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1101 01:00:37.172128   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetIP
	I1101 01:00:37.175116   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:37.175552   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:37.175583   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:37.175834   58823 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1101 01:00:37.179970   58823 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:00:37.193466   58823 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1101 01:00:37.193550   58823 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:00:37.239780   58823 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1101 01:00:37.239851   58823 ssh_runner.go:195] Run: which lz4
	I1101 01:00:37.243871   58823 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1101 01:00:37.248203   58823 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 01:00:37.248243   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
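Because no v1.16.0 images are preloaded on the guest, the runner stats /preloaded.tar.lz4, finds it missing, and copies the ~441 MB preload tarball over from the host cache. A hedged, local-filesystem sketch of that check-then-copy step (paths taken from the log; the real transfer happens over SSH via scp):

// Hedged sketch of "stat the target, copy only if missing" from the log above.
package main

import (
	"io"
	"log"
	"os"
)

func copyIfMissing(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present, nothing to do
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	src := ".minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4"
	if err := copyIfMissing(src, "/preloaded.tar.lz4"); err != nil {
		log.Fatal(err)
	}
}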
	I1101 01:00:33.273385   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:33.468847   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:33.558663   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:33.632226   58730 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:00:33.632305   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:33.645291   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:34.159920   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:34.660339   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:35.159837   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:35.659362   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:36.159870   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:36.189698   58730 api_server.go:72] duration metric: took 2.557471176s to wait for apiserver process to appear ...
	I1101 01:00:36.189726   58730 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:00:36.189746   58730 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8443/healthz ...
	I1101 01:00:35.662001   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Start
	I1101 01:00:35.662248   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Ensuring networks are active...
	I1101 01:00:35.663075   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Ensuring network default is active
	I1101 01:00:35.663589   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Ensuring network mk-default-k8s-diff-port-639310 is active
	I1101 01:00:35.664066   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Getting domain xml...
	I1101 01:00:35.664780   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Creating domain...
	I1101 01:00:37.046385   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting to get IP...
	I1101 01:00:37.047592   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.048056   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.048160   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:37.048064   59967 retry.go:31] will retry after 244.19131ms: waiting for machine to come up
	I1101 01:00:37.293636   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.294421   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.294535   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:37.294483   59967 retry.go:31] will retry after 281.302105ms: waiting for machine to come up
	I1101 01:00:37.577271   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.577934   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.577962   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:37.577874   59967 retry.go:31] will retry after 376.713113ms: waiting for machine to come up
	I1101 01:00:37.956666   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.957154   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.957182   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:37.957125   59967 retry.go:31] will retry after 366.92844ms: waiting for machine to come up
	I1101 01:00:38.325741   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:38.326257   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:38.326291   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:38.326226   59967 retry.go:31] will retry after 478.435824ms: waiting for machine to come up
	I1101 01:00:38.806215   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:38.806928   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:38.806965   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:38.806904   59967 retry.go:31] will retry after 910.120665ms: waiting for machine to come up
	I1101 01:00:39.718641   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:39.719281   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:39.719307   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:39.719210   59967 retry.go:31] will retry after 1.017844602s: waiting for machine to come up
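The retry.go lines show the kvm2 driver polling for the restarted VM's DHCP lease with increasing delays (244 ms, 281 ms, ... up to roughly a second and beyond). A generic sketch of that retry-with-growing-delay pattern is below; the lookup function is a stub assumption, and the growth factor only approximates whatever retry.go actually uses.

// Hedged sketch of polling for a machine IP with growing delays, as in retry.go.
package main

import (
	"errors"
	"fmt"
	"time"
)

func waitForIP(lookup func() (string, error), attempts int, base time.Duration) (string, error) {
	delay := base
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		time.Sleep(delay)
		delay += delay / 2 // grow ~1.5x per attempt; the real backoff/jitter differs
	}
	return "", errors.New("machine never reported an IP")
}

func main() {
	tries := 0
	ip, err := waitForIP(func() (string, error) {
		tries++
		if tries < 4 {
			return "", errors.New("no DHCP lease yet") // stand-in for the missing lease
		}
		return "203.0.113.10", nil // placeholder address, not one from the log
	}, 10, 250*time.Millisecond)
	fmt.Println(ip, err)
}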
	I1101 01:00:40.636542   58730 api_server.go:279] https://192.168.61.83:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 01:00:40.636586   58730 api_server.go:103] status: https://192.168.61.83:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 01:00:40.636602   58730 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8443/healthz ...
	I1101 01:00:40.687211   58730 api_server.go:279] https://192.168.61.83:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 01:00:40.687258   58730 api_server.go:103] status: https://192.168.61.83:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 01:00:41.187988   58730 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8443/healthz ...
	I1101 01:00:41.197585   58730 api_server.go:279] https://192.168.61.83:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:00:41.197626   58730 api_server.go:103] status: https://192.168.61.83:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:00:41.688019   58730 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8443/healthz ...
	I1101 01:00:41.698406   58730 api_server.go:279] https://192.168.61.83:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:00:41.698439   58730 api_server.go:103] status: https://192.168.61.83:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:00:42.188141   58730 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8443/healthz ...
	I1101 01:00:42.195663   58730 api_server.go:279] https://192.168.61.83:8443/healthz returned 200:
	ok
	I1101 01:00:42.204715   58730 api_server.go:141] control plane version: v1.28.3
	I1101 01:00:42.204746   58730 api_server.go:131] duration metric: took 6.015012484s to wait for apiserver health ...
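The sequence above is the usual apiserver warm-up: /healthz first answers 403 to the anonymous probe, then 500 while poststarthooks such as rbac/bootstrap-roles are still completing, and finally 200 after about six seconds. A hedged sketch of polling an HTTPS healthz endpoint until it reports ok (TLS verification is skipped because the probe is anonymous; the URL is the one from the log):

// Hedged sketch of the healthz polling loop api_server.go performs above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz never became ready within %s", timeout)
}

func main() {
	if err := pollHealthz("https://192.168.61.83:8443/healthz", time.Minute); err != nil {
		log.Fatal(err)
	}
}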
	I1101 01:00:42.204756   58730 cni.go:84] Creating CNI manager for ""
	I1101 01:00:42.204764   58730 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:00:42.206831   58730 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:00:38.979032   58823 crio.go:444] Took 1.735199 seconds to copy over tarball
	I1101 01:00:38.979127   58823 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 01:00:42.235526   58823 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.256363592s)
	I1101 01:00:42.235558   58823 crio.go:451] Took 3.256498 seconds to extract the tarball
	I1101 01:00:42.235592   58823 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 01:00:42.278508   58823 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:00:42.332199   58823 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1101 01:00:42.332225   58823 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1101 01:00:42.332323   58823 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:00:42.332383   58823 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1101 01:00:42.332425   58823 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1101 01:00:42.332445   58823 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1101 01:00:42.332394   58823 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1101 01:00:42.332554   58823 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1101 01:00:42.332552   58823 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1101 01:00:42.332611   58823 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1101 01:00:42.333952   58823 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1101 01:00:42.333965   58823 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1101 01:00:42.333971   58823 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1101 01:00:42.333973   58823 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:00:42.333951   58823 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1101 01:00:42.333959   58823 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1101 01:00:42.334015   58823 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1101 01:00:42.334422   58823 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1101 01:00:42.208425   58730 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:00:42.243672   58730 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1101 01:00:42.270472   58730 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:00:40.739283   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:40.739839   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:40.739871   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:40.739751   59967 retry.go:31] will retry after 924.830892ms: waiting for machine to come up
	I1101 01:00:41.666231   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:41.666922   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:41.666949   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:41.666878   59967 retry.go:31] will retry after 1.792434708s: waiting for machine to come up
	I1101 01:00:43.461158   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:43.461723   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:43.461758   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:43.461651   59967 retry.go:31] will retry after 1.458280506s: waiting for machine to come up
	I1101 01:00:44.921321   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:44.922072   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:44.922105   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:44.922018   59967 retry.go:31] will retry after 2.732488928s: waiting for machine to come up
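The libmachine lines above re-read the libvirt DHCP leases for the domain and, while no IP has been handed out yet, schedule another attempt with a growing, jittered delay (retry.go:31: "will retry after 924.830892ms", "1.792434708s", "2.732488928s", ...). A rough sketch of that wait-for-IP loop; lookupIP and the backoff schedule are placeholders, not the real retry helper:

package main

import (
    "errors"
    "fmt"
    "math/rand"
    "time"
)

var errNoLease = errors.New("unable to find current IP address of domain")

// lookupIP stands in for parsing the libvirt DHCP leases; it fails until the
// guest has actually been granted an address.
func lookupIP() (string, error) { return "", errNoLease }

// waitForIP retries lookupIP with a growing, jittered delay until it succeeds
// or the overall budget is spent.
func waitForIP(budget time.Duration) (string, error) {
    deadline := time.Now().Add(budget)
    backoff := 500 * time.Millisecond
    for attempt := 1; time.Now().Before(deadline); attempt++ {
        ip, err := lookupIP()
        if err == nil {
            return ip, nil
        }
        sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
        fmt.Printf("attempt %d: %v, will retry after %s\n", attempt, err, sleep)
        time.Sleep(sleep)
        backoff *= 2 // grow the base delay; the real retry helper uses its own schedule
    }
    return "", fmt.Errorf("machine never came up within %s", budget)
}

func main() {
    if _, err := waitForIP(30 * time.Second); err != nil {
        fmt.Println(err)
    }
}
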
	I1101 01:00:42.548949   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1101 01:00:42.549011   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1101 01:00:42.552787   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1101 01:00:42.554125   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1101 01:00:42.559301   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1101 01:00:42.560733   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1101 01:00:42.564609   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1101 01:00:42.857456   58823 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1101 01:00:42.857497   58823 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1101 01:00:42.857537   58823 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1101 01:00:42.857565   58823 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1101 01:00:42.857583   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.857502   58823 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1101 01:00:42.857597   58823 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1101 01:00:42.857644   58823 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1101 01:00:42.857703   58823 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1101 01:00:42.857733   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.857663   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.857666   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.880301   58823 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1101 01:00:42.880350   58823 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1101 01:00:42.880362   58823 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1101 01:00:42.880404   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.880421   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1101 01:00:42.880432   58823 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1101 01:00:42.880473   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.880475   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1101 01:00:42.880542   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1101 01:00:42.880377   58823 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1101 01:00:42.880587   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1101 01:00:42.880610   58823 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1101 01:00:42.880663   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.958449   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1101 01:00:42.975151   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1101 01:00:42.975188   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1101 01:00:42.979136   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1101 01:00:42.979198   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1101 01:00:42.979246   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1101 01:00:42.979306   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1101 01:00:43.059447   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1101 01:00:43.059470   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1101 01:00:43.059515   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1101 01:00:43.059572   58823 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1101 01:00:43.065313   58823 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1101 01:00:43.065337   58823 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1101 01:00:43.065397   58823 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1101 01:00:43.212775   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:00:44.821509   58823 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.756075689s)
	I1101 01:00:44.821542   58823 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1101 01:00:44.821600   58823 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.608800531s)
	I1101 01:00:44.821639   58823 cache_images.go:92] LoadImages completed in 2.489401317s
	W1101 01:00:44.821749   58823 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0: no such file or directory
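The image handling above (cache_images.go / crio.go) works per image: probe the runtime with "podman image inspect", remove a stale tag with "crictl rmi" when the image "needs transfer", skip the copy when the tarball already exists on the VM, and finally "podman load" the cached tarball from /var/lib/minikube/images. A condensed sketch of that decision using the same commands via os/exec; the real code runs them remotely through ssh_runner, which is omitted here:

package main

import (
    "fmt"
    "os/exec"
)

// loadCachedImage makes sure image is present in the podman/CRI-O store,
// loading it from a pre-copied tarball when the runtime does not have it yet.
func loadCachedImage(image, tarball string) error {
    // Same probe as the log: does the runtime already know this image?
    if err := exec.Command("sudo", "podman", "image", "inspect",
        "--format", "{{.Id}}", image).Run(); err == nil {
        return nil // already present, nothing to transfer
    }
    // Drop any stale tag so the freshly loaded image wins.
    _ = exec.Command("sudo", "crictl", "rmi", image).Run()
    // Stream the cached tarball (e.g. /var/lib/minikube/images/pause_3.1).
    out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
    if err != nil {
        return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
    }
    return nil
}

func main() {
    if err := loadCachedImage("registry.k8s.io/pause:3.1",
        "/var/lib/minikube/images/pause_3.1"); err != nil {
        fmt.Println(err)
    }
}
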
	I1101 01:00:44.821888   58823 ssh_runner.go:195] Run: crio config
	I1101 01:00:44.911017   58823 cni.go:84] Creating CNI manager for ""
	I1101 01:00:44.911094   58823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:00:44.911132   58823 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 01:00:44.911173   58823 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.90 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-330042 NodeName:old-k8s-version-330042 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1101 01:00:44.911365   58823 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-330042"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-330042
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.90:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 01:00:44.911510   58823 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-330042 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-330042 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
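The kubeadm YAML printed at kubeadm.go:181 and the kubelet ExecStart drop-in at kubeadm.go:976 are rendered from the options struct logged at kubeadm.go:176 and then copied to the VM a few lines below (/var/tmp/minikube/kubeadm.yaml.new, /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). A toy illustration of that kind of rendering with Go's text/template; the struct and template below are invented for the example and are not minikube's own:

package main

import (
    "os"
    "text/template"
)

// kubeadmParams is a made-up subset of the options logged at kubeadm.go:176.
type kubeadmParams struct {
    AdvertiseAddress  string
    BindPort          int
    ClusterName       string
    CRISocket         string
    PodSubnet         string
    KubernetesVersion string
}

const initConfig = `apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.ClusterName}}"
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
    p := kubeadmParams{
        AdvertiseAddress:  "192.168.39.90",
        BindPort:          8443,
        ClusterName:       "old-k8s-version-330042",
        CRISocket:         "/var/run/crio/crio.sock",
        PodSubnet:         "10.244.0.0/16",
        KubernetesVersion: "v1.16.0",
    }
    tmpl := template.Must(template.New("kubeadm").Parse(initConfig))
    if err := tmpl.Execute(os.Stdout, p); err != nil {
        panic(err)
    }
}
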
	I1101 01:00:44.911601   58823 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1101 01:00:44.925733   58823 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 01:00:44.925810   58823 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 01:00:44.939166   58823 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1101 01:00:44.962847   58823 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 01:00:44.986855   58823 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1101 01:00:45.011998   58823 ssh_runner.go:195] Run: grep 192.168.39.90	control-plane.minikube.internal$ /etc/hosts
	I1101 01:00:45.017160   58823 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.90	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:00:45.035826   58823 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042 for IP: 192.168.39.90
	I1101 01:00:45.035866   58823 certs.go:190] acquiring lock for shared ca certs: {Name:mk6185b3938c4b51e80f57b7c81926c81b632b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:00:45.036097   58823 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key
	I1101 01:00:45.036161   58823 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key
	I1101 01:00:45.036276   58823 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/client.key
	I1101 01:00:45.036363   58823 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/apiserver.key.05a13cdc
	I1101 01:00:45.036423   58823 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/proxy-client.key
	I1101 01:00:45.036600   58823 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem (1338 bytes)
	W1101 01:00:45.036642   58823 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504_empty.pem, impossibly tiny 0 bytes
	I1101 01:00:45.036657   58823 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 01:00:45.036697   58823 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem (1078 bytes)
	I1101 01:00:45.036734   58823 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem (1123 bytes)
	I1101 01:00:45.036769   58823 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem (1675 bytes)
	I1101 01:00:45.036837   58823 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:00:45.037808   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 01:00:45.071828   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 01:00:45.105069   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 01:00:45.136650   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 01:00:45.169633   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 01:00:45.202102   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 01:00:45.234227   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 01:00:45.265901   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 01:00:45.297720   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem --> /usr/share/ca-certificates/14504.pem (1338 bytes)
	I1101 01:00:45.330915   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /usr/share/ca-certificates/145042.pem (1708 bytes)
	I1101 01:00:45.361364   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 01:00:45.391023   58823 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 01:00:45.412643   58823 ssh_runner.go:195] Run: openssl version
	I1101 01:00:45.419938   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145042.pem && ln -fs /usr/share/ca-certificates/145042.pem /etc/ssl/certs/145042.pem"
	I1101 01:00:45.433972   58823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145042.pem
	I1101 01:00:45.439966   58823 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 01:00:45.440070   58823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145042.pem
	I1101 01:00:45.447248   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145042.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 01:00:45.461261   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 01:00:45.475166   58823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:45.481174   58823 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:45.481281   58823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:45.488190   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 01:00:45.502428   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14504.pem && ln -fs /usr/share/ca-certificates/14504.pem /etc/ssl/certs/14504.pem"
	I1101 01:00:45.515353   58823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14504.pem
	I1101 01:00:45.520135   58823 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 01:00:45.520196   58823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14504.pem
	I1101 01:00:45.525605   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14504.pem /etc/ssl/certs/51391683.0"
	I1101 01:00:45.535886   58823 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 01:00:45.540671   58823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 01:00:45.546973   58823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 01:00:45.554439   58823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 01:00:45.562216   58823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 01:00:45.570082   58823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 01:00:45.578073   58823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
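The "openssl x509 -noout -checkend 86400" runs above confirm that each control-plane certificate remains valid for at least another 24 hours before the restart path is attempted. The same check expressed with Go's crypto/x509, as an illustration only (the path in main is one of the files probed above):

package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
    "time"
)

// validFor reports whether the PEM certificate at path stays valid for at
// least d more, which is what `openssl x509 -checkend` verifies in seconds.
func validFor(path string, d time.Duration) (bool, error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return false, err
    }
    block, _ := pem.Decode(data)
    if block == nil {
        return false, fmt.Errorf("%s: no PEM data", path)
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        return false, err
    }
    return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
    ok, err := validFor("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
    fmt.Println(ok, err)
}
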
	I1101 01:00:45.586056   58823 kubeadm.go:404] StartCluster: {Name:old-k8s-version-330042 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-330042 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 01:00:45.586202   58823 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 01:00:45.586270   58823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:00:45.632205   58823 cri.go:89] found id: ""
	I1101 01:00:45.632279   58823 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 01:00:45.646397   58823 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1101 01:00:45.646432   58823 kubeadm.go:636] restartCluster start
	I1101 01:00:45.646492   58823 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 01:00:45.660754   58823 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:45.662302   58823 kubeconfig.go:92] found "old-k8s-version-330042" server: "https://192.168.39.90:8443"
	I1101 01:00:45.665617   58823 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 01:00:45.679127   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:45.679203   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:45.697578   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:45.697601   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:45.697662   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:45.715086   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:46.215841   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:46.215939   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:46.233039   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:46.715162   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:46.715283   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:46.727101   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:47.215409   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:47.215512   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:47.228104   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:43.297105   58730 system_pods.go:59] 9 kube-system pods found
	I1101 01:00:43.452043   58730 system_pods.go:61] "coredns-5dd5756b68-9hvh7" [d7d126c2-c270-452c-b939-15303a174742] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 01:00:43.452062   58730 system_pods.go:61] "coredns-5dd5756b68-gptmc" [fbbb9f17-32d6-456d-8171-eadcf64b11a8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 01:00:43.452074   58730 system_pods.go:61] "etcd-embed-certs-754132" [3c7474c1-788e-461d-bd20-e62c3c12cf27] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 01:00:43.452086   58730 system_pods.go:61] "kube-apiserver-embed-certs-754132" [d218a8d6-536c-400a-b81e-325b89ab475b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 01:00:43.452116   58730 system_pods.go:61] "kube-controller-manager-embed-certs-754132" [930b7861-b807-4f24-ba3c-9365a1e8dd8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 01:00:43.452128   58730 system_pods.go:61] "kube-proxy-d5d5x" [c7a6d923-0b37-452b-9979-0a64c05ee737] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 01:00:43.452142   58730 system_pods.go:61] "kube-scheduler-embed-certs-754132" [fd9c0833-f9d4-41cf-b5dd-b676ea5da6ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 01:00:43.452156   58730 system_pods.go:61] "metrics-server-57f55c9bc5-znchz" [60da0fbf-a2c4-4910-b06b-251b33b8ad0b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:00:43.452169   58730 system_pods.go:61] "storage-provisioner" [fbece4fb-6f83-4f17-acb8-94f493dd72e9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 01:00:43.452185   58730 system_pods.go:74] duration metric: took 1.181683794s to wait for pod list to return data ...
	I1101 01:00:43.452198   58730 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:00:44.181694   58730 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:00:44.181739   58730 node_conditions.go:123] node cpu capacity is 2
	I1101 01:00:44.181756   58730 node_conditions.go:105] duration metric: took 729.549671ms to run NodePressure ...
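system_pods.go above asks the apiserver for the kube-system pod list, logs each pod with its readiness, and records how long the round trip took, before the NodePressure check just completed. A rough client-go equivalent of that query; the kubeconfig path is a placeholder and the readiness formatting is simplified:

package main

import (
    "context"
    "fmt"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// listSystemPods mirrors the system_pods.go wait: it lists kube-system pods
// and reports how long the round trip took.
func listSystemPods(kubeconfig string) error {
    cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        return err
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        return err
    }
    start := time.Now()
    pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        return err
    }
    for _, p := range pods.Items {
        fmt.Printf("%q %s\n", p.Name, p.Status.Phase)
    }
    fmt.Printf("duration metric: took %s to wait for pod list to return data\n", time.Since(start))
    return nil
}

func main() {
    if err := listSystemPods("/home/jenkins/.kube/config"); err != nil {
        fmt.Println(err)
    }
}
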
	I1101 01:00:44.181784   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:45.274729   58730 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.092921592s)
	I1101 01:00:45.274761   58730 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1101 01:00:45.285444   58730 kubeadm.go:787] kubelet initialised
	I1101 01:00:45.285478   58730 kubeadm.go:788] duration metric: took 10.704919ms waiting for restarted kubelet to initialise ...
	I1101 01:00:45.285489   58730 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:00:45.303122   58730 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-9hvh7" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:47.333376   58730 pod_ready.go:92] pod "coredns-5dd5756b68-9hvh7" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:47.333404   58730 pod_ready.go:81] duration metric: took 2.030252648s waiting for pod "coredns-5dd5756b68-9hvh7" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:47.333415   58730 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-gptmc" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:47.339165   58730 pod_ready.go:92] pod "coredns-5dd5756b68-gptmc" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:47.339189   58730 pod_ready.go:81] duration metric: took 5.76803ms waiting for pod "coredns-5dd5756b68-gptmc" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:47.339201   58730 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:47.656259   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:47.656733   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:47.656767   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:47.656688   59967 retry.go:31] will retry after 3.546373187s: waiting for machine to come up
	I1101 01:00:47.716219   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:47.716302   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:47.729221   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:48.215453   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:48.215562   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:48.230259   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:48.715905   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:48.716035   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:48.729001   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:49.216123   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:49.216217   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:49.232128   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:49.715640   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:49.715708   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:49.729098   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:50.215271   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:50.215379   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:50.228075   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:50.715151   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:50.715256   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:50.726839   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:51.215204   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:51.215293   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:51.227412   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:51.715753   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:51.715870   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:51.728794   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:52.215318   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:52.215437   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:52.227527   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:48.860188   58730 pod_ready.go:92] pod "etcd-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:48.860215   58730 pod_ready.go:81] duration metric: took 1.521005544s waiting for pod "etcd-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:48.860228   58730 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:50.286848   58730 pod_ready.go:92] pod "kube-apiserver-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:50.286882   58730 pod_ready.go:81] duration metric: took 1.426640629s waiting for pod "kube-apiserver-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:50.286894   58730 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:51.886531   58730 pod_ready.go:92] pod "kube-controller-manager-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:51.886555   58730 pod_ready.go:81] duration metric: took 1.599653882s waiting for pod "kube-controller-manager-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:51.886565   58730 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d5d5x" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:52.079723   58730 pod_ready.go:92] pod "kube-proxy-d5d5x" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:52.079752   58730 pod_ready.go:81] duration metric: took 193.181169ms waiting for pod "kube-proxy-d5d5x" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:52.079766   58730 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:51.204423   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:51.204909   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:51.204945   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:51.204854   59967 retry.go:31] will retry after 3.382936792s: waiting for machine to come up
	I1101 01:00:54.588976   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.589398   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Found IP for machine: 192.168.72.97
	I1101 01:00:54.589427   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Reserving static IP address...
	I1101 01:00:54.589447   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has current primary IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.589764   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Reserved static IP address: 192.168.72.97
	I1101 01:00:54.589783   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for SSH to be available...
	I1101 01:00:54.589811   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-639310", mac: "52:54:00:83:e0:44", ip: "192.168.72.97"} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.589841   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | skip adding static IP to network mk-default-k8s-diff-port-639310 - found existing host DHCP lease matching {name: "default-k8s-diff-port-639310", mac: "52:54:00:83:e0:44", ip: "192.168.72.97"}
	I1101 01:00:54.589858   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | Getting to WaitForSSH function...
	I1101 01:00:54.591920   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.592295   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.592327   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.592518   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | Using SSH client type: external
	I1101 01:00:54.592546   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa (-rw-------)
	I1101 01:00:54.592568   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 01:00:54.592581   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | About to run SSH command:
	I1101 01:00:54.592605   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | exit 0
	I1101 01:00:54.687664   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | SSH cmd err, output: <nil>: 
	I1101 01:00:54.688005   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetConfigRaw
	I1101 01:00:54.688653   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetIP
	I1101 01:00:54.691258   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.691761   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.691803   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.692096   59148 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/config.json ...
	I1101 01:00:54.692278   59148 machine.go:88] provisioning docker machine ...
	I1101 01:00:54.692297   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:54.692554   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetMachineName
	I1101 01:00:54.692765   59148 buildroot.go:166] provisioning hostname "default-k8s-diff-port-639310"
	I1101 01:00:54.692787   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetMachineName
	I1101 01:00:54.692962   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:54.695491   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.695887   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.695917   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.696074   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:54.696280   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:54.696477   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:54.696624   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:54.696817   59148 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:54.697275   59148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.97 22 <nil> <nil>}
	I1101 01:00:54.697298   59148 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-639310 && echo "default-k8s-diff-port-639310" | sudo tee /etc/hostname
	I1101 01:00:54.836084   59148 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-639310
	
	I1101 01:00:54.836118   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:54.839109   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.839437   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.839463   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.839732   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:54.839986   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:54.840131   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:54.840290   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:54.840501   59148 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:54.840865   59148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.97 22 <nil> <nil>}
	I1101 01:00:54.840885   59148 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-639310' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-639310/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-639310' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 01:00:54.979804   59148 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 01:00:54.979841   59148 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1101 01:00:54.979870   59148 buildroot.go:174] setting up certificates
	I1101 01:00:54.979881   59148 provision.go:83] configureAuth start
	I1101 01:00:54.979898   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetMachineName
	I1101 01:00:54.980246   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetIP
	I1101 01:00:54.983397   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.983760   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.983794   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.984029   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:54.986746   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.987112   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.987160   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.987328   59148 provision.go:138] copyHostCerts
	I1101 01:00:54.987418   59148 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem, removing ...
	I1101 01:00:54.987437   59148 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 01:00:54.987507   59148 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1101 01:00:54.987619   59148 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem, removing ...
	I1101 01:00:54.987628   59148 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 01:00:54.987651   59148 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1101 01:00:54.987707   59148 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem, removing ...
	I1101 01:00:54.987714   59148 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 01:00:54.987731   59148 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1101 01:00:54.987773   59148 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-639310 san=[192.168.72.97 192.168.72.97 localhost 127.0.0.1 minikube default-k8s-diff-port-639310]
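provision.go:112 above generates the machine's server certificate with the SAN list shown (san=[192.168.72.97 ... localhost 127.0.0.1 minikube default-k8s-diff-port-639310]), signed against the ca.pem/ca-key.pem pair collected in copyHostCerts. A compact sketch of producing a certificate with that SAN shape using crypto/x509; it self-signs for brevity instead of signing with the CA, and the subject values are placeholders:

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "math/big"
    "net"
    "os"
    "time"
)

func main() {
    key, _ := rsa.GenerateKey(rand.Reader, 2048)
    tmpl := &x509.Certificate{
        SerialNumber: big.NewInt(1),
        Subject:      pkix.Name{CommonName: "minikube", Organization: []string{"jenkins.default-k8s-diff-port-639310"}},
        // SANs matching the shape logged above: VM IP, loopback, and the machine names.
        DNSNames:    []string{"localhost", "minikube", "default-k8s-diff-port-639310"},
        IPAddresses: []net.IP{net.ParseIP("192.168.72.97"), net.ParseIP("127.0.0.1")},
        NotBefore:   time.Now(),
        NotAfter:    time.Now().Add(26280 * time.Hour), // mirrors the CertExpiration setting above
        KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    }
    // Self-signed here; the real provisioning signs with the shared minikube CA key.
    der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
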
	I1101 01:00:56.081549   58676 start.go:369] acquired machines lock for "no-preload-008483" in 57.600332472s
	I1101 01:00:56.081600   58676 start.go:96] Skipping create...Using existing machine configuration
	I1101 01:00:56.081611   58676 fix.go:54] fixHost starting: 
	I1101 01:00:56.082003   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:00:56.082041   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:00:56.098896   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33091
	I1101 01:00:56.099300   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:00:56.099786   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:00:56.099817   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:00:56.100159   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:00:56.100364   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:00:56.100511   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetState
	I1101 01:00:56.104041   58676 fix.go:102] recreateIfNeeded on no-preload-008483: state=Stopped err=<nil>
	I1101 01:00:56.104071   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	W1101 01:00:56.104250   58676 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 01:00:56.106287   58676 out.go:177] * Restarting existing kvm2 VM for "no-preload-008483" ...
	I1101 01:00:52.715585   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:52.715665   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:52.726877   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:53.216119   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:53.216202   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:53.228700   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:53.715253   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:53.715342   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:53.729029   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:54.215451   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:54.215554   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:54.228217   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:54.715451   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:54.715513   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:54.727356   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:55.216034   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:55.216130   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:55.227905   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:55.680067   58823 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1101 01:00:55.680120   58823 kubeadm.go:1128] stopping kube-system containers ...
	I1101 01:00:55.680135   58823 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 01:00:55.680204   58823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:00:55.726658   58823 cri.go:89] found id: ""
	I1101 01:00:55.726744   58823 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 01:00:55.748477   58823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:00:55.758933   58823 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:00:55.759013   58823 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:00:55.769130   58823 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 01:00:55.769156   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:55.911136   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:57.164062   58823 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.252874473s)
	I1101 01:00:57.164095   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:57.403267   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:55.270327   59148 provision.go:172] copyRemoteCerts
	I1101 01:00:55.270394   59148 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 01:00:55.270418   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:55.272988   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.273410   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:55.273444   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.273609   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:55.273818   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:55.273966   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:55.274113   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:00:55.367354   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 01:00:55.391069   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1101 01:00:55.413001   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 01:00:55.436904   59148 provision.go:86] duration metric: configureAuth took 457.006108ms
	I1101 01:00:55.436930   59148 buildroot.go:189] setting minikube options for container-runtime
	I1101 01:00:55.437115   59148 config.go:182] Loaded profile config "default-k8s-diff-port-639310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:00:55.437187   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:55.440286   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.440627   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:55.440662   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.440789   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:55.440989   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:55.441187   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:55.441330   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:55.441491   59148 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:55.441905   59148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.97 22 <nil> <nil>}
	I1101 01:00:55.441928   59148 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 01:00:55.788340   59148 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 01:00:55.788372   59148 machine.go:91] provisioned docker machine in 1.096081387s
	I1101 01:00:55.788386   59148 start.go:300] post-start starting for "default-k8s-diff-port-639310" (driver="kvm2")
	I1101 01:00:55.788401   59148 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 01:00:55.788443   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:55.788777   59148 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 01:00:55.788846   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:55.792110   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.792594   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:55.792626   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.792829   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:55.793080   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:55.793273   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:55.793421   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:00:55.893108   59148 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 01:00:55.898425   59148 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 01:00:55.898452   59148 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1101 01:00:55.898530   59148 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1101 01:00:55.898619   59148 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> 145042.pem in /etc/ssl/certs
	I1101 01:00:55.898751   59148 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 01:00:55.909396   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:00:55.943412   59148 start.go:303] post-start completed in 154.998365ms
	I1101 01:00:55.943440   59148 fix.go:56] fixHost completed within 20.309363198s
	I1101 01:00:55.943464   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:55.946417   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.946777   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:55.946810   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.947048   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:55.947268   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:55.947484   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:55.947662   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:55.947849   59148 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:55.948212   59148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.97 22 <nil> <nil>}
	I1101 01:00:55.948225   59148 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1101 01:00:56.081387   59148 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698800456.033536949
	
	I1101 01:00:56.081411   59148 fix.go:206] guest clock: 1698800456.033536949
	I1101 01:00:56.081422   59148 fix.go:219] Guest: 2023-11-01 01:00:56.033536949 +0000 UTC Remote: 2023-11-01 01:00:55.943445038 +0000 UTC m=+270.963710441 (delta=90.091911ms)
	I1101 01:00:56.081446   59148 fix.go:190] guest clock delta is within tolerance: 90.091911ms
	I1101 01:00:56.081451   59148 start.go:83] releasing machines lock for "default-k8s-diff-port-639310", held for 20.447404197s
	I1101 01:00:56.081484   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:56.081826   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetIP
	I1101 01:00:56.084827   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:56.085289   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:56.085326   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:56.085543   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:56.086049   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:56.086272   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:56.086374   59148 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 01:00:56.086425   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:56.086677   59148 ssh_runner.go:195] Run: cat /version.json
	I1101 01:00:56.086709   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:56.089377   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:56.089696   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:56.089784   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:56.089841   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:56.090077   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:56.090088   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:56.090108   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:56.090256   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:56.090329   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:56.090405   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:56.090479   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:56.090557   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:56.090613   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:00:56.090681   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:00:56.220669   59148 ssh_runner.go:195] Run: systemctl --version
	I1101 01:00:56.226971   59148 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 01:00:56.375845   59148 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 01:00:56.383893   59148 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 01:00:56.383986   59148 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 01:00:56.404009   59148 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 01:00:56.404035   59148 start.go:472] detecting cgroup driver to use...
	I1101 01:00:56.404107   59148 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 01:00:56.420015   59148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 01:00:56.435577   59148 docker.go:204] disabling cri-docker service (if available) ...
	I1101 01:00:56.435652   59148 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 01:00:56.448542   59148 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 01:00:56.465197   59148 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 01:00:56.607142   59148 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 01:00:56.739287   59148 docker.go:220] disabling docker service ...
	I1101 01:00:56.739366   59148 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 01:00:56.753861   59148 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 01:00:56.768891   59148 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 01:00:56.893929   59148 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 01:00:57.022891   59148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 01:00:57.039063   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 01:00:57.058893   59148 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 01:00:57.058964   59148 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:57.070769   59148 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 01:00:57.070845   59148 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:57.082528   59148 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:57.094350   59148 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:57.105953   59148 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 01:00:57.117745   59148 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 01:00:57.128493   59148 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 01:00:57.128553   59148 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 01:00:57.145858   59148 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 01:00:57.157318   59148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 01:00:57.288371   59148 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 01:00:57.489356   59148 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 01:00:57.489458   59148 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 01:00:57.495837   59148 start.go:540] Will wait 60s for crictl version
	I1101 01:00:57.495907   59148 ssh_runner.go:195] Run: which crictl
	I1101 01:00:57.500572   59148 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 01:00:57.546076   59148 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1101 01:00:57.546245   59148 ssh_runner.go:195] Run: crio --version
	I1101 01:00:57.601745   59148 ssh_runner.go:195] Run: crio --version
	I1101 01:00:57.664097   59148 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1101 01:00:54.387561   58730 pod_ready.go:102] pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace has status "Ready":"False"
	I1101 01:00:56.388062   58730 pod_ready.go:92] pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:56.388085   58730 pod_ready.go:81] duration metric: took 4.308312567s waiting for pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:56.388094   58730 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:57.666096   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetIP
	I1101 01:00:57.670028   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:57.670437   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:57.670472   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:57.670760   59148 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1101 01:00:57.675850   59148 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:00:57.689379   59148 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 01:00:57.689439   59148 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:00:57.736333   59148 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1101 01:00:57.736404   59148 ssh_runner.go:195] Run: which lz4
	I1101 01:00:57.740532   59148 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1101 01:00:57.745488   59148 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 01:00:57.745535   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1101 01:00:59.649981   59148 crio.go:444] Took 1.909486 seconds to copy over tarball
	I1101 01:00:59.650070   59148 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 01:00:56.107642   58676 main.go:141] libmachine: (no-preload-008483) Calling .Start
	I1101 01:00:56.107815   58676 main.go:141] libmachine: (no-preload-008483) Ensuring networks are active...
	I1101 01:00:56.108696   58676 main.go:141] libmachine: (no-preload-008483) Ensuring network default is active
	I1101 01:00:56.109190   58676 main.go:141] libmachine: (no-preload-008483) Ensuring network mk-no-preload-008483 is active
	I1101 01:00:56.109623   58676 main.go:141] libmachine: (no-preload-008483) Getting domain xml...
	I1101 01:00:56.110400   58676 main.go:141] libmachine: (no-preload-008483) Creating domain...
	I1101 01:00:57.626479   58676 main.go:141] libmachine: (no-preload-008483) Waiting to get IP...
	I1101 01:00:57.627653   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:00:57.628245   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:00:57.628315   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:00:57.628210   60142 retry.go:31] will retry after 306.868541ms: waiting for machine to come up
	I1101 01:00:57.936854   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:00:57.937358   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:00:57.937392   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:00:57.937309   60142 retry.go:31] will retry after 366.94808ms: waiting for machine to come up
	I1101 01:00:58.306219   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:00:58.306880   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:00:58.306909   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:00:58.306815   60142 retry.go:31] will retry after 470.784378ms: waiting for machine to come up
	I1101 01:00:58.781164   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:00:58.781784   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:00:58.781810   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:00:58.781686   60142 retry.go:31] will retry after 475.883045ms: waiting for machine to come up
	I1101 01:00:59.259400   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:00:59.259922   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:00:59.259964   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:00:59.259816   60142 retry.go:31] will retry after 533.372113ms: waiting for machine to come up
	I1101 01:00:59.794619   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:00:59.795307   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:00:59.795335   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:00:59.795222   60142 retry.go:31] will retry after 643.335947ms: waiting for machine to come up
	I1101 01:01:00.440339   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:00.440876   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:00.440901   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:00.440795   60142 retry.go:31] will retry after 899.488876ms: waiting for machine to come up
	I1101 01:00:57.545316   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:57.641733   58823 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:00:57.641812   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:57.655826   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:58.173767   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:58.674113   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:59.174394   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:59.674240   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:59.705758   58823 api_server.go:72] duration metric: took 2.064024888s to wait for apiserver process to appear ...
	I1101 01:00:59.705791   58823 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:00:59.705814   58823 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I1101 01:00:58.517913   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:00.993028   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:03.059373   59148 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.409271602s)
	I1101 01:01:03.059403   59148 crio.go:451] Took 3.409395 seconds to extract the tarball
	I1101 01:01:03.059413   59148 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 01:01:03.101818   59148 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:01:03.153263   59148 crio.go:496] all images are preloaded for cri-o runtime.
	I1101 01:01:03.153284   59148 cache_images.go:84] Images are preloaded, skipping loading
	I1101 01:01:03.153341   59148 ssh_runner.go:195] Run: crio config
	I1101 01:01:03.228205   59148 cni.go:84] Creating CNI manager for ""
	I1101 01:01:03.228225   59148 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:01:03.228241   59148 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 01:01:03.228265   59148 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.97 APIServerPort:8444 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-639310 NodeName:default-k8s-diff-port-639310 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 01:01:03.228386   59148 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.97
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-639310"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 01:01:03.228463   59148 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-639310 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-639310 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1101 01:01:03.228517   59148 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 01:01:03.240926   59148 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 01:01:03.241014   59148 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 01:01:03.253440   59148 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I1101 01:01:03.271480   59148 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 01:01:03.292784   59148 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I1101 01:01:03.315295   59148 ssh_runner.go:195] Run: grep 192.168.72.97	control-plane.minikube.internal$ /etc/hosts
	I1101 01:01:03.319922   59148 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:01:03.332820   59148 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310 for IP: 192.168.72.97
	I1101 01:01:03.332869   59148 certs.go:190] acquiring lock for shared ca certs: {Name:mk6185b3938c4b51e80f57b7c81926c81b632b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:01:03.333015   59148 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key
	I1101 01:01:03.333067   59148 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key
	I1101 01:01:03.333174   59148 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/client.key
	I1101 01:01:03.333255   59148 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/apiserver.key.6d6df538
	I1101 01:01:03.333307   59148 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/proxy-client.key
	I1101 01:01:03.333469   59148 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem (1338 bytes)
	W1101 01:01:03.333531   59148 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504_empty.pem, impossibly tiny 0 bytes
	I1101 01:01:03.333542   59148 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 01:01:03.333580   59148 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem (1078 bytes)
	I1101 01:01:03.333632   59148 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem (1123 bytes)
	I1101 01:01:03.333699   59148 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem (1675 bytes)
	I1101 01:01:03.333761   59148 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:01:03.334633   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 01:01:03.361740   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 01:01:03.387535   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 01:01:03.414252   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 01:01:03.438492   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 01:01:03.463501   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 01:01:03.489800   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 01:01:03.517317   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 01:01:03.543330   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem --> /usr/share/ca-certificates/14504.pem (1338 bytes)
	I1101 01:01:03.567744   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /usr/share/ca-certificates/145042.pem (1708 bytes)
	I1101 01:01:03.594230   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 01:01:03.620857   59148 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 01:01:03.638676   59148 ssh_runner.go:195] Run: openssl version
	I1101 01:01:03.644139   59148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14504.pem && ln -fs /usr/share/ca-certificates/14504.pem /etc/ssl/certs/14504.pem"
	I1101 01:01:03.654667   59148 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14504.pem
	I1101 01:01:03.659261   59148 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 01:01:03.659322   59148 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14504.pem
	I1101 01:01:03.664592   59148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14504.pem /etc/ssl/certs/51391683.0"
	I1101 01:01:03.675482   59148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145042.pem && ln -fs /usr/share/ca-certificates/145042.pem /etc/ssl/certs/145042.pem"
	I1101 01:01:03.687903   59148 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145042.pem
	I1101 01:01:03.692901   59148 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 01:01:03.692970   59148 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145042.pem
	I1101 01:01:03.698691   59148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145042.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 01:01:03.709971   59148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 01:01:03.720612   59148 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:01:03.725306   59148 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:01:03.725397   59148 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:01:03.731004   59148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 01:01:03.743558   59148 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 01:01:03.748428   59148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 01:01:03.754404   59148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 01:01:03.760210   59148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 01:01:03.765964   59148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 01:01:03.771813   59148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 01:01:03.777659   59148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 01:01:03.783754   59148 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-639310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-639310 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.97 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 01:01:03.783846   59148 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 01:01:03.783903   59148 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:01:03.823390   59148 cri.go:89] found id: ""
	I1101 01:01:03.823473   59148 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 01:01:03.835317   59148 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1101 01:01:03.835339   59148 kubeadm.go:636] restartCluster start
	I1101 01:01:03.835393   59148 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 01:01:03.845532   59148 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:03.846629   59148 kubeconfig.go:92] found "default-k8s-diff-port-639310" server: "https://192.168.72.97:8444"
	I1101 01:01:03.849176   59148 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 01:01:03.859318   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:03.859387   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:03.871598   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:03.871620   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:03.871682   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:03.882903   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:04.383593   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:04.383684   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:04.398424   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:04.883982   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:04.884095   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:04.901344   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:01.341708   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:01.342186   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:01.342216   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:01.342138   60142 retry.go:31] will retry after 1.416825478s: waiting for machine to come up
	I1101 01:01:02.760851   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:02.761364   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:02.761391   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:02.761319   60142 retry.go:31] will retry after 1.783291063s: waiting for machine to come up
	I1101 01:01:04.546179   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:04.546731   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:04.546768   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:04.546684   60142 retry.go:31] will retry after 1.94150512s: waiting for machine to come up
	I1101 01:01:04.706156   58823 api_server.go:269] stopped: https://192.168.39.90:8443/healthz: Get "https://192.168.39.90:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 01:01:04.706226   58823 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I1101 01:01:05.474195   58823 api_server.go:279] https://192.168.39.90:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 01:01:05.474233   58823 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 01:01:05.975031   58823 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I1101 01:01:05.981753   58823 api_server.go:279] https://192.168.39.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1101 01:01:05.981796   58823 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1101 01:01:06.474331   58823 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I1101 01:01:06.483910   58823 api_server.go:279] https://192.168.39.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1101 01:01:06.483971   58823 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1101 01:01:06.974478   58823 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I1101 01:01:06.983225   58823 api_server.go:279] https://192.168.39.90:8443/healthz returned 200:
	ok
	I1101 01:01:06.992078   58823 api_server.go:141] control plane version: v1.16.0
	I1101 01:01:06.992104   58823 api_server.go:131] duration metric: took 7.286307099s to wait for apiserver health ...
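The progression above is the usual apiserver start-up sequence: the anonymous /healthz probe is first refused with 403 (typically because the RBAC bootstrap roles that permit unauthenticated health checks are not in place yet), then answered with 500 while individual post-start hooks (rbac/bootstrap-roles, the scheduling priority classes, ca-registration) are still failing, and finally with 200 "ok" once every check passes. A hand-rolled version of the same probe, using the endpoint from the log (-k skips certificate verification purely to keep the example short):

    # Poll /healthz until it returns the plain "ok" body, then print the per-check detail.
    until curl -sk --max-time 2 https://192.168.39.90:8443/healthz | grep -qx ok; do
      sleep 0.5
    done
    curl -sk "https://192.168.39.90:8443/healthz?verbose"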
	I1101 01:01:06.992112   58823 cni.go:84] Creating CNI manager for ""
	I1101 01:01:06.992118   58823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:01:06.994180   58823 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:01:06.995961   58823 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:01:07.007478   58823 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
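The bridge CNI step needs only the two actions visible above: create /etc/cni/net.d and write a single conflist into it. The 457-byte file contents are not reproduced in the log; the snippet below is a generic bridge-plus-portmap conflist (JSON) of the kind such a file typically contains, purely for illustration and not the exact configuration minikube generated:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }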
	I1101 01:01:07.025029   58823 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:01:07.036645   58823 system_pods.go:59] 7 kube-system pods found
	I1101 01:01:07.036685   58823 system_pods.go:61] "coredns-5644d7b6d9-swhtm" [5c5eacff-9271-46c5-add0-a3931b67876b] Running
	I1101 01:01:07.036692   58823 system_pods.go:61] "etcd-old-k8s-version-330042" [0b703394-0d1c-419d-8e08-c2c299046293] Running
	I1101 01:01:07.036699   58823 system_pods.go:61] "kube-apiserver-old-k8s-version-330042" [0dcb0028-fa22-4107-afa1-fbdd14b615ab] Running
	I1101 01:01:07.036706   58823 system_pods.go:61] "kube-controller-manager-old-k8s-version-330042" [adc1372e-45e1-4365-a039-c06af715cb24] Running
	I1101 01:01:07.036712   58823 system_pods.go:61] "kube-proxy-h86m8" [6db2c8ff-26f9-4f22-9cbd-2405a81d9128] Running
	I1101 01:01:07.036718   58823 system_pods.go:61] "kube-scheduler-old-k8s-version-330042" [f3f78aa9-fcb1-4b87-b7fa-f86c44e801c0] Running
	I1101 01:01:07.036724   58823 system_pods.go:61] "storage-provisioner" [710e45b8-dab7-4bbc-9ce8-f607db4cb63e] Running
	I1101 01:01:07.036733   58823 system_pods.go:74] duration metric: took 11.681153ms to wait for pod list to return data ...
	I1101 01:01:07.036745   58823 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:01:07.043383   58823 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:01:07.043420   58823 node_conditions.go:123] node cpu capacity is 2
	I1101 01:01:07.043433   58823 node_conditions.go:105] duration metric: took 6.681589ms to run NodePressure ...
	I1101 01:01:07.043454   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:07.419893   58823 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1101 01:01:07.425342   58823 retry.go:31] will retry after 365.112122ms: kubelet not initialised
	I1101 01:01:03.491770   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:05.989935   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:05.383225   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:05.383308   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:05.399889   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:05.884036   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:05.884134   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:05.899867   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:06.383118   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:06.383241   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:06.399285   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:06.883379   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:06.883497   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:06.895160   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:07.383835   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:07.383951   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:07.401766   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:07.883254   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:07.883368   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:07.900024   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:08.383405   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:08.383494   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:08.401659   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:08.883099   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:08.883189   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:08.898348   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:09.383858   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:09.384003   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:09.396380   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:09.884003   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:09.884128   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:09.901031   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:06.489565   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:06.490176   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:06.490200   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:06.490117   60142 retry.go:31] will retry after 2.694877407s: waiting for machine to come up
	I1101 01:01:09.186086   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:09.186554   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:09.186584   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:09.186497   60142 retry.go:31] will retry after 2.651563817s: waiting for machine to come up
	I1101 01:01:07.799240   58823 retry.go:31] will retry after 519.025086ms: kubelet not initialised
	I1101 01:01:08.325024   58823 retry.go:31] will retry after 345.44325ms: kubelet not initialised
	I1101 01:01:08.674686   58823 retry.go:31] will retry after 665.113314ms: kubelet not initialised
	I1101 01:01:09.345867   58823 retry.go:31] will retry after 1.421023017s: kubelet not initialised
	I1101 01:01:10.773100   58823 retry.go:31] will retry after 1.15707988s: kubelet not initialised
	I1101 01:01:11.936215   58823 retry.go:31] will retry after 3.290674523s: kubelet not initialised
	I1101 01:01:08.490229   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:10.990967   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:12.991285   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:10.383739   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:10.383800   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:10.398972   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:10.882991   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:10.883089   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:10.897346   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:11.383976   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:11.384059   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:11.396332   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:11.883903   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:11.884020   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:11.897279   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:12.383675   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:12.383786   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:12.399623   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:12.883112   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:12.883191   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:12.895484   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:13.383069   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:13.383181   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:13.395417   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:13.860229   59148 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1101 01:01:13.860262   59148 kubeadm.go:1128] stopping kube-system containers ...
	I1101 01:01:13.860277   59148 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 01:01:13.860360   59148 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:01:13.901712   59148 cri.go:89] found id: ""
	I1101 01:01:13.901809   59148 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 01:01:13.918956   59148 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:01:13.931401   59148 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:01:13.931477   59148 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:01:13.943486   59148 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 01:01:13.943512   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:14.077324   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
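Because none of the kubeconfigs exist on disk (the ls check above exits with status 2), the stale-config cleanup is skipped and the cluster is rebuilt from the freshly copied kubeadm.yaml by replaying individual kubeadm init phases; certs and kubeconfig run here, and kubelet-start, control-plane and etcd follow later in the log. Condensed, with the paths and version taken from the log, the sequence amounts to:

    # Replay of the kubeadm init phases used by the reconfigure path.
    K8S=/var/lib/minikube/binaries/v1.28.3
    CFG=/var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$K8S:$PATH" kubeadm init phase certs all         --config "$CFG"
    sudo env PATH="$K8S:$PATH" kubeadm init phase kubeconfig all    --config "$CFG"
    sudo env PATH="$K8S:$PATH" kubeadm init phase kubelet-start     --config "$CFG"
    sudo env PATH="$K8S:$PATH" kubeadm init phase control-plane all --config "$CFG"
    sudo env PATH="$K8S:$PATH" kubeadm init phase etcd local        --config "$CFG"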
	I1101 01:01:11.839684   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:11.840140   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:11.840169   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:11.840105   60142 retry.go:31] will retry after 4.157820096s: waiting for machine to come up
	I1101 01:01:15.233157   58823 retry.go:31] will retry after 3.531336164s: kubelet not initialised
	I1101 01:01:15.490358   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:17.491953   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:16.001208   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.001765   58676 main.go:141] libmachine: (no-preload-008483) Found IP for machine: 192.168.50.140
	I1101 01:01:16.001790   58676 main.go:141] libmachine: (no-preload-008483) Reserving static IP address...
	I1101 01:01:16.001806   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has current primary IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.002298   58676 main.go:141] libmachine: (no-preload-008483) Reserved static IP address: 192.168.50.140
	I1101 01:01:16.002338   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "no-preload-008483", mac: "52:54:00:6c:aa:b5", ip: "192.168.50.140"} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.002357   58676 main.go:141] libmachine: (no-preload-008483) Waiting for SSH to be available...
	I1101 01:01:16.002381   58676 main.go:141] libmachine: (no-preload-008483) DBG | skip adding static IP to network mk-no-preload-008483 - found existing host DHCP lease matching {name: "no-preload-008483", mac: "52:54:00:6c:aa:b5", ip: "192.168.50.140"}
	I1101 01:01:16.002397   58676 main.go:141] libmachine: (no-preload-008483) DBG | Getting to WaitForSSH function...
	I1101 01:01:16.004912   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.005349   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.005387   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.005528   58676 main.go:141] libmachine: (no-preload-008483) DBG | Using SSH client type: external
	I1101 01:01:16.005550   58676 main.go:141] libmachine: (no-preload-008483) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa (-rw-------)
	I1101 01:01:16.005589   58676 main.go:141] libmachine: (no-preload-008483) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.140 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 01:01:16.005607   58676 main.go:141] libmachine: (no-preload-008483) DBG | About to run SSH command:
	I1101 01:01:16.005621   58676 main.go:141] libmachine: (no-preload-008483) DBG | exit 0
	I1101 01:01:16.100131   58676 main.go:141] libmachine: (no-preload-008483) DBG | SSH cmd err, output: <nil>: 
	I1101 01:01:16.100576   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetConfigRaw
	I1101 01:01:16.101304   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetIP
	I1101 01:01:16.104212   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.104482   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.104528   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.104710   58676 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/config.json ...
	I1101 01:01:16.104933   58676 machine.go:88] provisioning docker machine ...
	I1101 01:01:16.104951   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:01:16.105159   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetMachineName
	I1101 01:01:16.105351   58676 buildroot.go:166] provisioning hostname "no-preload-008483"
	I1101 01:01:16.105375   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetMachineName
	I1101 01:01:16.105551   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:16.107936   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.108287   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.108333   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.108422   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:16.108594   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.108734   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.108861   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:16.109041   58676 main.go:141] libmachine: Using SSH client type: native
	I1101 01:01:16.109531   58676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I1101 01:01:16.109557   58676 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-008483 && echo "no-preload-008483" | sudo tee /etc/hostname
	I1101 01:01:16.249893   58676 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-008483
	
	I1101 01:01:16.249924   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:16.253130   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.253531   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.253571   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.253879   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:16.254106   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.254304   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.254441   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:16.254608   58676 main.go:141] libmachine: Using SSH client type: native
	I1101 01:01:16.254965   58676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I1101 01:01:16.254987   58676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-008483' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-008483/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-008483' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 01:01:16.386797   58676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
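The hostname step is two parts: set the hostname (persisting it in /etc/hostname) and make sure /etc/hosts maps 127.0.1.1 to the new name so local lookups of the machine's own name keep working. A quick way to confirm both took effect (hostname taken from the log):

    # Confirm the hostname and its loopback /etc/hosts entry.
    hostnamectl --static
    grep -n 'no-preload-008483' /etc/hosts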
	I1101 01:01:16.386834   58676 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1101 01:01:16.386862   58676 buildroot.go:174] setting up certificates
	I1101 01:01:16.386870   58676 provision.go:83] configureAuth start
	I1101 01:01:16.386879   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetMachineName
	I1101 01:01:16.387149   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetIP
	I1101 01:01:16.390409   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.390812   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.390844   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.391055   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:16.393580   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.394122   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.394154   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.394352   58676 provision.go:138] copyHostCerts
	I1101 01:01:16.394425   58676 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem, removing ...
	I1101 01:01:16.394438   58676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 01:01:16.394506   58676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1101 01:01:16.394646   58676 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem, removing ...
	I1101 01:01:16.394658   58676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 01:01:16.394690   58676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1101 01:01:16.394774   58676 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem, removing ...
	I1101 01:01:16.394786   58676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 01:01:16.394811   58676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1101 01:01:16.394874   58676 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.no-preload-008483 san=[192.168.50.140 192.168.50.140 localhost 127.0.0.1 minikube no-preload-008483]
	I1101 01:01:16.593958   58676 provision.go:172] copyRemoteCerts
	I1101 01:01:16.594024   58676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 01:01:16.594046   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:16.597073   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.597449   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.597484   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.597723   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:16.597956   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.598108   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:16.598247   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:01:16.689574   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 01:01:16.714820   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1101 01:01:16.744383   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 01:01:16.769305   58676 provision.go:86] duration metric: configureAuth took 382.416455ms
	I1101 01:01:16.769338   58676 buildroot.go:189] setting minikube options for container-runtime
	I1101 01:01:16.769596   58676 config.go:182] Loaded profile config "no-preload-008483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:01:16.769692   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:16.773209   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.773565   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.773628   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.773828   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:16.774071   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.774353   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.774570   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:16.774772   58676 main.go:141] libmachine: Using SSH client type: native
	I1101 01:01:16.775132   58676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I1101 01:01:16.775150   58676 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 01:01:17.110397   58676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 01:01:17.110481   58676 machine.go:91] provisioned docker machine in 1.005532035s
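The last provisioning command above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube so that the in-cluster service range 10.96.0.0/12 is treated as an insecure registry, then restarts CRI-O to pick it up. A quick check that the drop-in is in place and the restart succeeded:

    # Verify the sysconfig drop-in written above and that CRI-O came back up.
    cat /etc/sysconfig/crio.minikube
    sudo systemctl is-active crio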
	I1101 01:01:17.110500   58676 start.go:300] post-start starting for "no-preload-008483" (driver="kvm2")
	I1101 01:01:17.110513   58676 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 01:01:17.110559   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:01:17.110920   58676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 01:01:17.110948   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:17.114342   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.114794   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:17.114829   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.115028   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:17.115227   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:17.115440   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:17.115621   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:01:17.210514   58676 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 01:01:17.216393   58676 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 01:01:17.216423   58676 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1101 01:01:17.216522   58676 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1101 01:01:17.216640   58676 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> 145042.pem in /etc/ssl/certs
	I1101 01:01:17.216773   58676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 01:01:17.229604   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:01:17.255095   58676 start.go:303] post-start completed in 144.577436ms
	I1101 01:01:17.255120   58676 fix.go:56] fixHost completed within 21.173509578s
	I1101 01:01:17.255192   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:17.258433   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.258833   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:17.258858   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.259085   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:17.259305   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:17.259478   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:17.259628   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:17.259825   58676 main.go:141] libmachine: Using SSH client type: native
	I1101 01:01:17.260306   58676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I1101 01:01:17.260321   58676 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1101 01:01:17.389718   58676 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698800477.337229135
	
	I1101 01:01:17.389748   58676 fix.go:206] guest clock: 1698800477.337229135
	I1101 01:01:17.389770   58676 fix.go:219] Guest: 2023-11-01 01:01:17.337229135 +0000 UTC Remote: 2023-11-01 01:01:17.255124581 +0000 UTC m=+361.362725964 (delta=82.104554ms)
	I1101 01:01:17.389797   58676 fix.go:190] guest clock delta is within tolerance: 82.104554ms
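The `%!s(MISSING)`/`%!N(MISSING)` fragments are not part of the command that actually ran; they are artifacts of minikube logging a string containing literal %s/%N verbs through a printf-style formatter. The real command is `date +%s.%N`, and its result is compared against the host clock: here the two readings differ by about 82ms, within tolerance, so no clock adjustment is forced. A rough manual version of the same comparison (SSH key path and address from the log; the delta includes SSH round-trip time):

    # Compare guest and host clocks; a sub-second delta is expected.
    KEY=/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa
    guest=$(ssh -i "$KEY" -o StrictHostKeyChecking=no docker@192.168.50.140 'date +%s.%N')
    host=$(date +%s.%N)
    awk -v h="$host" -v g="$guest" 'BEGIN{printf "delta: %.3f s\n", h - g}'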
	I1101 01:01:17.389804   58676 start.go:83] releasing machines lock for "no-preload-008483", held for 21.308227601s
	I1101 01:01:17.389828   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:01:17.390149   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetIP
	I1101 01:01:17.393289   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.393692   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:17.393723   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.393937   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:01:17.394589   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:01:17.394780   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:01:17.394877   58676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 01:01:17.394918   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:17.395060   58676 ssh_runner.go:195] Run: cat /version.json
	I1101 01:01:17.395115   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:17.398497   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:17.398497   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.398581   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:17.398642   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.398665   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.398700   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:17.398853   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:17.398861   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:17.398881   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.398995   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:01:17.399475   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:17.399644   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:17.399798   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:17.399976   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:01:17.524462   58676 ssh_runner.go:195] Run: systemctl --version
	I1101 01:01:17.530328   58676 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 01:01:17.678956   58676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 01:01:17.686754   58676 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 01:01:17.686834   58676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 01:01:17.705358   58676 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 01:01:17.705388   58676 start.go:472] detecting cgroup driver to use...
	I1101 01:01:17.705527   58676 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 01:01:17.722410   58676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 01:01:17.739380   58676 docker.go:204] disabling cri-docker service (if available) ...
	I1101 01:01:17.739443   58676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 01:01:17.755953   58676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 01:01:17.769672   58676 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 01:01:17.900801   58676 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 01:01:18.027283   58676 docker.go:220] disabling docker service ...
	I1101 01:01:18.027378   58676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 01:01:18.041230   58676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 01:01:18.052784   58676 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 01:01:18.165341   58676 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 01:01:18.276403   58676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 01:01:18.289618   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 01:01:18.308480   58676 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 01:01:18.308562   58676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:01:18.318597   58676 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 01:01:18.318673   58676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:01:18.328312   58676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:01:18.340054   58676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:01:18.351854   58676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 01:01:18.364129   58676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 01:01:18.372789   58676 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 01:01:18.372879   58676 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 01:01:18.385792   58676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 01:01:18.394803   58676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 01:01:18.503941   58676 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 01:01:18.687034   58676 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 01:01:18.687137   58676 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 01:01:18.691750   58676 start.go:540] Will wait 60s for crictl version
	I1101 01:01:18.691818   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:18.695752   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 01:01:18.735012   58676 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1101 01:01:18.735098   58676 ssh_runner.go:195] Run: crio --version
	I1101 01:01:18.782835   58676 ssh_runner.go:195] Run: crio --version
	I1101 01:01:18.829727   58676 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
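Taken together, the runtime setup above registers the CRI-O socket with crictl, pins the pause image to registry.k8s.io/pause:3.9, switches CRI-O to the cgroupfs cgroup manager with conmon in the pod cgroup, removes any leftover /etc/cni/net.mk directory, enables bridge netfilter and IPv4 forwarding, and restarts the service; the crictl/crio version probes then confirm CRI-O 1.24.1 is answering on the socket. The settings can be double-checked directly against the files and sysctls the log just modified:

    # Spot-check the CRI-O configuration applied above.
    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward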
	I1101 01:01:15.054547   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:15.248625   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:15.325492   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:15.396782   59148 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:01:15.396854   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:15.420220   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:15.941271   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:16.441997   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:16.942240   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:17.441850   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:17.941784   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:17.965191   59148 api_server.go:72] duration metric: took 2.5684081s to wait for apiserver process to appear ...
	I1101 01:01:17.965220   59148 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:01:17.965238   59148 api_server.go:253] Checking apiserver healthz at https://192.168.72.97:8444/healthz ...
	I1101 01:01:18.831303   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetIP
	I1101 01:01:18.834574   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:18.834969   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:18.835003   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:18.835233   58676 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1101 01:01:18.839259   58676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:01:18.853665   58676 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 01:01:18.853725   58676 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:01:18.890995   58676 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1101 01:01:18.891024   58676 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.3 registry.k8s.io/kube-controller-manager:v1.28.3 registry.k8s.io/kube-scheduler:v1.28.3 registry.k8s.io/kube-proxy:v1.28.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1101 01:01:18.891130   58676 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1101 01:01:18.891143   58676 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.3
	I1101 01:01:18.891144   58676 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1101 01:01:18.891201   58676 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1101 01:01:18.891263   58676 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.3
	I1101 01:01:18.891397   58676 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.3
	I1101 01:01:18.891415   58676 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1101 01:01:18.891134   58676 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:01:18.892729   58676 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.3
	I1101 01:01:18.892742   58676 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:01:18.892747   58676 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1101 01:01:18.892760   58676 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1101 01:01:18.892760   58676 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1101 01:01:18.892729   58676 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.3
	I1101 01:01:18.892790   58676 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.3
	I1101 01:01:18.892835   58676 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1101 01:01:19.112836   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1101 01:01:19.131170   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.3
	I1101 01:01:19.147328   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.3
	I1101 01:01:19.148513   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I1101 01:01:19.155909   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.3
	I1101 01:01:19.163598   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.3
	I1101 01:01:19.166436   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I1101 01:01:19.290823   58676 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.3" needs transfer: "registry.k8s.io/kube-proxy:v1.28.3" does not exist at hash "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf" in container runtime
	I1101 01:01:19.290888   58676 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.3
	I1101 01:01:19.290943   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:19.331622   58676 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.3" does not exist at hash "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076" in container runtime
	I1101 01:01:19.331709   58676 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.3" does not exist at hash "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4" in container runtime
	I1101 01:01:19.331776   58676 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.3
	I1101 01:01:19.331717   58676 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.3
	I1101 01:01:19.331872   58676 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.3" does not exist at hash "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3" in container runtime
	I1101 01:01:19.331899   58676 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1101 01:01:19.331905   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:19.331645   58676 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1101 01:01:19.331979   58676 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1101 01:01:19.331986   58676 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1101 01:01:19.332011   58676 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1101 01:01:19.332023   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:19.331945   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:19.332053   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:19.332040   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.3
	I1101 01:01:19.331842   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:19.342099   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.3
	I1101 01:01:19.396521   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I1101 01:01:19.396603   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.3
	I1101 01:01:19.396612   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3
	I1101 01:01:19.396628   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.3
	I1101 01:01:19.396681   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1101 01:01:19.396700   58676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.3
	I1101 01:01:19.396750   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3
	I1101 01:01:19.396842   58676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1101 01:01:19.497732   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.3 (exists)
	I1101 01:01:19.497756   58676 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.3
	I1101 01:01:19.497784   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1101 01:01:19.497813   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3
	I1101 01:01:19.497871   58676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0
	I1101 01:01:19.497924   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3
	I1101 01:01:19.497964   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.3 (exists)
	I1101 01:01:19.498009   58676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1101 01:01:19.498015   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3
	I1101 01:01:19.498054   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1101 01:01:19.498111   58676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1
	I1101 01:01:19.498117   58676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1101 01:01:19.764214   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:01:18.769797   58823 retry.go:31] will retry after 5.956460089s: kubelet not initialised
	I1101 01:01:19.987384   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:21.989585   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:22.277798   59148 api_server.go:279] https://192.168.72.97:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 01:01:22.277829   59148 api_server.go:103] status: https://192.168.72.97:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 01:01:22.277839   59148 api_server.go:253] Checking apiserver healthz at https://192.168.72.97:8444/healthz ...
	I1101 01:01:22.371756   59148 api_server.go:279] https://192.168.72.97:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 01:01:22.371796   59148 api_server.go:103] status: https://192.168.72.97:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 01:01:22.872332   59148 api_server.go:253] Checking apiserver healthz at https://192.168.72.97:8444/healthz ...
	I1101 01:01:22.884543   59148 api_server.go:279] https://192.168.72.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:01:22.884587   59148 api_server.go:103] status: https://192.168.72.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:01:23.372033   59148 api_server.go:253] Checking apiserver healthz at https://192.168.72.97:8444/healthz ...
	I1101 01:01:23.381608   59148 api_server.go:279] https://192.168.72.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:01:23.381657   59148 api_server.go:103] status: https://192.168.72.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:01:23.872319   59148 api_server.go:253] Checking apiserver healthz at https://192.168.72.97:8444/healthz ...
	I1101 01:01:23.879515   59148 api_server.go:279] https://192.168.72.97:8444/healthz returned 200:
	ok
	I1101 01:01:23.892376   59148 api_server.go:141] control plane version: v1.28.3
	I1101 01:01:23.892412   59148 api_server.go:131] duration metric: took 5.927178892s to wait for apiserver health ...
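
The 403/500/200 sequence above is the normal apiserver bootstrap pattern: anonymous requests are rejected until RBAC bootstrap completes, the poststarthooks report failures while they are still pending, and then /healthz flips to 200. Below is a minimal, self-contained Go sketch of such a poll loop; it is illustrative only (not minikube's api_server.go), and the URL and the InsecureSkipVerify choice are assumptions for the example.

// waitforhealthz.go: poll an apiserver /healthz endpoint until it returns 200,
// tolerating the transient 403/500 responses seen in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: the apiserver serves a self-signed cert during bootstrap,
			// so verification is skipped in this sketch.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200 "ok"
			}
			// 403 (anonymous forbidden) and 500 (poststarthooks pending) are
			// expected while the control plane is still coming up.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.97:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
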
	I1101 01:01:23.892424   59148 cni.go:84] Creating CNI manager for ""
	I1101 01:01:23.892433   59148 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:01:23.894577   59148 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:01:23.896163   59148 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:01:23.928482   59148 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1101 01:01:23.952485   59148 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:01:23.968054   59148 system_pods.go:59] 8 kube-system pods found
	I1101 01:01:23.968095   59148 system_pods.go:61] "coredns-5dd5756b68-lmxx8" [c74c5ddc-56a8-422c-a140-1fdd14ef817d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 01:01:23.968115   59148 system_pods.go:61] "etcd-default-k8s-diff-port-639310" [1baf2571-f6c6-43bc-8051-e72f7eb4ed70] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 01:01:23.968126   59148 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-639310" [9cbc66c6-7c66-4b24-9400-a5add2edec14] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 01:01:23.968145   59148 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-639310" [99945be6-6fb8-4da6-8c6a-c25a2023d2d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 01:01:23.968158   59148 system_pods.go:61] "kube-proxy-f45wg" [abe74c94-5140-4c35-a141-d995652948e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 01:01:23.968167   59148 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-639310" [299c1962-1945-4525-90c7-384d515dc4e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 01:01:23.968188   59148 system_pods.go:61] "metrics-server-57f55c9bc5-6szl7" [1e00ef03-d5f4-4e8b-a247-8c31a5492f9e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:01:23.968201   59148 system_pods.go:61] "storage-provisioner" [fe2e7631-0564-44d2-afbd-578fb37f6a04] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 01:01:23.968215   59148 system_pods.go:74] duration metric: took 15.694719ms to wait for pod list to return data ...
	I1101 01:01:23.968224   59148 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:01:23.972141   59148 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:01:23.972177   59148 node_conditions.go:123] node cpu capacity is 2
	I1101 01:01:23.972191   59148 node_conditions.go:105] duration metric: took 3.96106ms to run NodePressure ...
	I1101 01:01:23.972214   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:24.253558   59148 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1101 01:01:24.258842   59148 kubeadm.go:787] kubelet initialised
	I1101 01:01:24.258869   59148 kubeadm.go:788] duration metric: took 5.283339ms waiting for restarted kubelet to initialise ...
	I1101 01:01:24.258878   59148 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:01:24.265507   59148 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-lmxx8" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:24.271381   59148 pod_ready.go:97] node "default-k8s-diff-port-639310" hosting pod "coredns-5dd5756b68-lmxx8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.271408   59148 pod_ready.go:81] duration metric: took 5.876802ms waiting for pod "coredns-5dd5756b68-lmxx8" in "kube-system" namespace to be "Ready" ...
	E1101 01:01:24.271418   59148 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-639310" hosting pod "coredns-5dd5756b68-lmxx8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.271426   59148 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:24.277446   59148 pod_ready.go:97] node "default-k8s-diff-port-639310" hosting pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.277476   59148 pod_ready.go:81] duration metric: took 6.04229ms waiting for pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	E1101 01:01:24.277487   59148 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-639310" hosting pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.277495   59148 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:24.283557   59148 pod_ready.go:97] node "default-k8s-diff-port-639310" hosting pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.283604   59148 pod_ready.go:81] duration metric: took 6.094277ms waiting for pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	E1101 01:01:24.283617   59148 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-639310" hosting pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.283630   59148 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:24.357249   59148 pod_ready.go:97] node "default-k8s-diff-port-639310" hosting pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.357288   59148 pod_ready.go:81] duration metric: took 73.64295ms waiting for pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	E1101 01:01:24.357302   59148 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-639310" hosting pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.357319   59148 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f45wg" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:21.457919   58676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0: (1.960002941s)
	I1101 01:01:21.457955   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I1101 01:01:21.458111   58676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.28.3: (1.960074529s)
	I1101 01:01:21.458140   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.3 (exists)
	I1101 01:01:21.458152   58676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.28.3: (1.960014372s)
	I1101 01:01:21.458176   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.3 (exists)
	I1101 01:01:21.458226   58676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1: (1.960094366s)
	I1101 01:01:21.458252   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I1101 01:01:21.458267   58676 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.694021872s)
	I1101 01:01:21.458306   58676 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1101 01:01:21.458344   58676 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:01:21.458392   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:21.458644   58676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3: (1.960815533s)
	I1101 01:01:21.458659   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3 from cache
	I1101 01:01:21.458686   58676 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1101 01:01:21.458718   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1101 01:01:21.462463   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:01:23.757842   58676 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.295346464s)
	I1101 01:01:23.757911   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1101 01:01:23.757849   58676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3: (2.299099605s)
	I1101 01:01:23.757965   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3 from cache
	I1101 01:01:23.758006   58676 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I1101 01:01:23.758025   58676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1101 01:01:23.758040   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I1101 01:01:23.764726   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1101 01:01:24.732471   58823 retry.go:31] will retry after 9.584941607s: kubelet not initialised
	I1101 01:01:23.990727   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:26.489463   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:25.156181   59148 pod_ready.go:92] pod "kube-proxy-f45wg" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:25.156211   59148 pod_ready.go:81] duration metric: took 798.883976ms waiting for pod "kube-proxy-f45wg" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:25.156225   59148 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:27.476794   59148 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:29.974327   59148 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:29.974364   59148 pod_ready.go:81] duration metric: took 4.818128166s waiting for pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:29.974381   59148 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:28.990433   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:30.991378   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:32.004594   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:34.006695   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:31.399348   58676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.641283444s)
	I1101 01:01:31.399378   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I1101 01:01:31.399412   58676 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1101 01:01:31.399465   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1101 01:01:33.857323   58676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3: (2.45781579s)
	I1101 01:01:33.857356   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3 from cache
	I1101 01:01:33.857384   58676 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1101 01:01:33.857444   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1101 01:01:34.322788   58823 retry.go:31] will retry after 7.673111332s: kubelet not initialised
	I1101 01:01:33.488934   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:35.489417   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:37.989455   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:36.506432   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:39.004133   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:36.328716   58676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3: (2.471243195s)
	I1101 01:01:36.328755   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3 from cache
	I1101 01:01:36.328788   58676 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I1101 01:01:36.328839   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I1101 01:01:37.691820   58676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.362944664s)
	I1101 01:01:37.691851   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I1101 01:01:37.691877   58676 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1101 01:01:37.691978   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1101 01:01:38.442125   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1101 01:01:38.442181   58676 cache_images.go:123] Successfully loaded all cached images
	I1101 01:01:38.442188   58676 cache_images.go:92] LoadImages completed in 19.55115042s
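
For context, the LoadImages phase traced above repeats one pattern per image: inspect the container runtime for the image ID, remove the tag if the expected hash is missing ("needs transfer"), check whether the cached tarball already exists under /var/lib/minikube/images, and podman-load it if needed. A minimal local Go sketch of that decision follows; it assumes podman and the tarball path are available where it runs, whereas minikube's real cache_images.go drives the same commands over SSH inside the VM.

// ensureimage.go: load a cached image tarball into podman/CRI-O storage
// only if the image is not already present in the runtime.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func ensureImage(image, tarball string) error {
	// Equivalent of "podman image inspect --format {{.Id}} <image>".
	if err := exec.Command("podman", "image", "inspect", "--format", "{{.Id}}", image).Run(); err == nil {
		return nil // image already present, nothing to transfer
	}
	// Equivalent of the "stat ... /var/lib/minikube/images/<tarball>" check.
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("cached tarball missing: %w", err)
	}
	// Equivalent of "podman load -i <tarball>".
	out, err := exec.Command("podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load failed: %v: %s", err, out)
	}
	fmt.Printf("loaded %s from %s\n", image, tarball)
	return nil
}

func main() {
	// Paths and tags mirror the log above; adjust for a real environment.
	if err := ensureImage("registry.k8s.io/kube-proxy:v1.28.3",
		"/var/lib/minikube/images/kube-proxy_v1.28.3"); err != nil {
		fmt.Println(err)
	}
}
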
	I1101 01:01:38.442260   58676 ssh_runner.go:195] Run: crio config
	I1101 01:01:38.499778   58676 cni.go:84] Creating CNI manager for ""
	I1101 01:01:38.499799   58676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:01:38.499820   58676 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 01:01:38.499846   58676 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.140 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-008483 NodeName:no-preload-008483 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 01:01:38.500007   58676 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.140
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-008483"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.140
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.140"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 01:01:38.500076   58676 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-008483 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:no-preload-008483 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
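
The kubelet drop-in shown above is generated from a handful of cluster parameters (Kubernetes version, node name, node IP). The following is a hedged sketch of how such a drop-in could be rendered with Go's text/template; the template shape and struct fields here are simplifications for illustration, not minikube's actual template.

// renderdropin.go: render a kubelet systemd drop-in like the one logged above.
package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

type node struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values taken from the log above.
	_ = t.Execute(os.Stdout, node{
		KubernetesVersion: "v1.28.3",
		NodeName:          "no-preload-008483",
		NodeIP:            "192.168.50.140",
	})
}
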
	I1101 01:01:38.500135   58676 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 01:01:38.510073   58676 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 01:01:38.510160   58676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 01:01:38.517853   58676 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1101 01:01:38.534085   58676 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 01:01:38.549312   58676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I1101 01:01:38.566320   58676 ssh_runner.go:195] Run: grep 192.168.50.140	control-plane.minikube.internal$ /etc/hosts
	I1101 01:01:38.569923   58676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.140	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:01:38.582147   58676 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483 for IP: 192.168.50.140
	I1101 01:01:38.582180   58676 certs.go:190] acquiring lock for shared ca certs: {Name:mk6185b3938c4b51e80f57b7c81926c81b632b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:01:38.582353   58676 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key
	I1101 01:01:38.582412   58676 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key
	I1101 01:01:38.582512   58676 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/client.key
	I1101 01:01:38.582596   58676 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/apiserver.key.306fa7af
	I1101 01:01:38.582664   58676 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/proxy-client.key
	I1101 01:01:38.582841   58676 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem (1338 bytes)
	W1101 01:01:38.582887   58676 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504_empty.pem, impossibly tiny 0 bytes
	I1101 01:01:38.582903   58676 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 01:01:38.582941   58676 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem (1078 bytes)
	I1101 01:01:38.582978   58676 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem (1123 bytes)
	I1101 01:01:38.583015   58676 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem (1675 bytes)
	I1101 01:01:38.583082   58676 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:01:38.583827   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 01:01:38.607306   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 01:01:38.631666   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 01:01:38.655201   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 01:01:38.678237   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 01:01:38.700410   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 01:01:38.726807   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 01:01:38.752672   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 01:01:38.776285   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 01:01:38.799902   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem --> /usr/share/ca-certificates/14504.pem (1338 bytes)
	I1101 01:01:38.823790   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /usr/share/ca-certificates/145042.pem (1708 bytes)
	I1101 01:01:38.847407   58676 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 01:01:38.863594   58676 ssh_runner.go:195] Run: openssl version
	I1101 01:01:38.869214   58676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14504.pem && ln -fs /usr/share/ca-certificates/14504.pem /etc/ssl/certs/14504.pem"
	I1101 01:01:38.878725   58676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14504.pem
	I1101 01:01:38.883007   58676 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 01:01:38.883069   58676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14504.pem
	I1101 01:01:38.888251   58676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14504.pem /etc/ssl/certs/51391683.0"
	I1101 01:01:38.899894   58676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145042.pem && ln -fs /usr/share/ca-certificates/145042.pem /etc/ssl/certs/145042.pem"
	I1101 01:01:38.909658   58676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145042.pem
	I1101 01:01:38.914011   58676 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 01:01:38.914088   58676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145042.pem
	I1101 01:01:38.919323   58676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145042.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 01:01:38.928836   58676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 01:01:38.937988   58676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:01:38.943540   58676 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:01:38.943607   58676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:01:38.949543   58676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 01:01:38.959098   58676 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 01:01:38.963149   58676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 01:01:38.968868   58676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 01:01:38.974315   58676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 01:01:38.979746   58676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 01:01:38.985852   58676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 01:01:38.991864   58676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
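
The "openssl x509 -noout -in <cert> -checkend 86400" runs above ask whether each control-plane certificate expires within the next 24 hours. The same check can be expressed in pure Go, as in the sketch below; the certificate path is taken from the log and is assumed to be readable wherever this runs.

// checkend.go: report whether a PEM certificate expires within a given window,
// mirroring "openssl x509 -checkend 86400".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if "now + window" falls past the certificate's NotAfter date.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
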
	I1101 01:01:38.998153   58676 kubeadm.go:404] StartCluster: {Name:no-preload-008483 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.3 ClusterName:no-preload-008483 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.140 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 01:01:38.998271   58676 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 01:01:38.998340   58676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:01:39.045797   58676 cri.go:89] found id: ""
	I1101 01:01:39.045870   58676 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 01:01:39.056166   58676 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1101 01:01:39.056197   58676 kubeadm.go:636] restartCluster start
	I1101 01:01:39.056252   58676 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 01:01:39.065191   58676 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:39.066337   58676 kubeconfig.go:92] found "no-preload-008483" server: "https://192.168.50.140:8443"
	I1101 01:01:39.068843   58676 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 01:01:39.077558   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:39.077606   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:39.088105   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:39.088123   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:39.088168   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:39.100203   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:39.600957   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:39.601029   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:39.612652   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:40.101101   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:40.101191   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:40.113249   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:40.600487   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:40.600552   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:40.612183   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:42.002176   58823 kubeadm.go:787] kubelet initialised
	I1101 01:01:42.002198   58823 kubeadm.go:788] duration metric: took 34.582278796s waiting for restarted kubelet to initialise ...
	I1101 01:01:42.002211   58823 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:01:42.007691   58823 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-m8mn8" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.012657   58823 pod_ready.go:92] pod "coredns-5644d7b6d9-m8mn8" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:42.012677   58823 pod_ready.go:81] duration metric: took 4.961011ms waiting for pod "coredns-5644d7b6d9-m8mn8" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.012687   58823 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-swhtm" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.017099   58823 pod_ready.go:92] pod "coredns-5644d7b6d9-swhtm" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:42.017124   58823 pod_ready.go:81] duration metric: took 4.429709ms waiting for pod "coredns-5644d7b6d9-swhtm" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.017137   58823 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.021376   58823 pod_ready.go:92] pod "etcd-old-k8s-version-330042" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:42.021403   58823 pod_ready.go:81] duration metric: took 4.25772ms waiting for pod "etcd-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.021415   58823 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.026057   58823 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-330042" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:42.026080   58823 pod_ready.go:81] duration metric: took 4.65685ms waiting for pod "kube-apiserver-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.026096   58823 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.401057   58823 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-330042" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:42.401085   58823 pod_ready.go:81] duration metric: took 374.980275ms waiting for pod "kube-controller-manager-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.401099   58823 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-h86m8" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:40.487876   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:42.488609   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:41.504485   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:44.005180   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:41.100662   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:41.100773   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:41.113339   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:41.601121   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:41.601195   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:41.613986   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:42.101110   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:42.101188   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:42.113963   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:42.600356   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:42.600458   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:42.612154   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:43.100679   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:43.100767   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:43.113009   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:43.601328   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:43.601402   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:43.612862   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:44.101146   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:44.101261   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:44.113407   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:44.600812   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:44.600955   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:44.613161   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:45.100665   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:45.100769   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:45.112905   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:45.600416   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:45.600515   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:45.612930   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:42.801878   58823 pod_ready.go:92] pod "kube-proxy-h86m8" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:42.801899   58823 pod_ready.go:81] duration metric: took 400.793617ms waiting for pod "kube-proxy-h86m8" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.801907   58823 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:43.201586   58823 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-330042" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:43.201618   58823 pod_ready.go:81] duration metric: took 399.702904ms waiting for pod "kube-scheduler-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:43.201632   58823 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:45.508037   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:44.489092   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:46.493162   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:46.506251   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:49.004539   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:46.100957   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:46.101023   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:46.113645   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:46.600681   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:46.600781   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:46.612564   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:47.101090   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:47.101156   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:47.113500   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:47.601105   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:47.601244   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:47.613091   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:48.100608   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:48.100725   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:48.112995   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:48.600520   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:48.600603   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:48.612240   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:49.077973   58676 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1101 01:01:49.078017   58676 kubeadm.go:1128] stopping kube-system containers ...
	I1101 01:01:49.078031   58676 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 01:01:49.078097   58676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:01:49.117615   58676 cri.go:89] found id: ""
	I1101 01:01:49.117689   58676 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 01:01:49.133583   58676 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:01:49.142851   58676 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:01:49.142922   58676 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:01:49.151952   58676 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 01:01:49.151973   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:49.270827   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:50.046638   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:50.252510   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:50.327660   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:50.398419   58676 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:01:50.398511   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:50.415262   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:50.931672   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:47.508466   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:49.509032   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:51.510816   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:48.987561   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:50.989519   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:52.989978   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:51.004704   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:53.006138   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:51.431168   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:51.931127   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:52.431292   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:52.462617   58676 api_server.go:72] duration metric: took 2.064198698s to wait for apiserver process to appear ...
	I1101 01:01:52.462644   58676 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:01:52.462658   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:52.463297   58676 api_server.go:269] stopped: https://192.168.50.140:8443/healthz: Get "https://192.168.50.140:8443/healthz": dial tcp 192.168.50.140:8443: connect: connection refused
	I1101 01:01:52.463360   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:52.463831   58676 api_server.go:269] stopped: https://192.168.50.140:8443/healthz: Get "https://192.168.50.140:8443/healthz": dial tcp 192.168.50.140:8443: connect: connection refused
	I1101 01:01:52.964290   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:54.007720   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:56.012280   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:56.353340   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 01:01:56.353399   58676 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 01:01:56.353416   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:56.404133   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:01:56.404176   58676 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:01:56.464272   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:56.470496   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:01:56.470553   58676 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:01:56.964058   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:56.975831   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:01:56.975877   58676 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:01:57.464038   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:57.472652   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:01:57.472697   58676 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:01:57.964020   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:57.970866   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 200:
	ok
	I1101 01:01:57.979612   58676 api_server.go:141] control plane version: v1.28.3
	I1101 01:01:57.979641   58676 api_server.go:131] duration metric: took 5.516990946s to wait for apiserver health ...
	I1101 01:01:57.979650   58676 cni.go:84] Creating CNI manager for ""
	I1101 01:01:57.979657   58676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:01:57.981694   58676 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:01:54.990377   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:57.489817   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:55.505767   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:57.505977   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:00.004800   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:57.983198   58676 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:01:58.006916   58676 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1101 01:01:58.035969   58676 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:01:58.047783   58676 system_pods.go:59] 8 kube-system pods found
	I1101 01:01:58.047833   58676 system_pods.go:61] "coredns-5dd5756b68-kcjf2" [e5cba8fe-f5c0-48cd-a21b-649caf4405cd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 01:01:58.047848   58676 system_pods.go:61] "etcd-no-preload-008483" [6e8ce64d-5c27-4528-9ecb-4bd1c2ab55c9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 01:01:58.047868   58676 system_pods.go:61] "kube-apiserver-no-preload-008483" [c320b03e-f364-4b38-8f09-5239d66f90e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 01:01:58.047881   58676 system_pods.go:61] "kube-controller-manager-no-preload-008483" [b89beee3-61e6-4efa-926f-43ae6a50e44b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 01:01:58.047893   58676 system_pods.go:61] "kube-proxy-xjfsj" [a7195683-b9ee-440c-82e6-efcd325a35e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 01:01:58.047907   58676 system_pods.go:61] "kube-scheduler-no-preload-008483" [d8c6a1f5-ceca-46af-9a40-22053f5387b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 01:01:58.047920   58676 system_pods.go:61] "metrics-server-57f55c9bc5-49wtw" [b87d5491-9981-48d5-9cf8-34dbd4b24435] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:01:58.047946   58676 system_pods.go:61] "storage-provisioner" [bf9d5910-ae5f-48f9-9358-54b2068c2e2c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 01:01:58.047959   58676 system_pods.go:74] duration metric: took 11.96541ms to wait for pod list to return data ...
	I1101 01:01:58.047971   58676 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:01:58.052170   58676 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:01:58.052205   58676 node_conditions.go:123] node cpu capacity is 2
	I1101 01:01:58.052218   58676 node_conditions.go:105] duration metric: took 4.239786ms to run NodePressure ...
	I1101 01:01:58.052237   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:58.340580   58676 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1101 01:01:58.351480   58676 kubeadm.go:787] kubelet initialised
	I1101 01:01:58.351510   58676 kubeadm.go:788] duration metric: took 10.903426ms waiting for restarted kubelet to initialise ...
	I1101 01:01:58.351520   58676 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:01:58.359099   58676 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-kcjf2" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:00.383123   58676 pod_ready.go:102] pod "coredns-5dd5756b68-kcjf2" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:58.509858   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:01.009429   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:59.988392   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:01.989042   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:02.505009   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:05.004485   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:02.880623   58676 pod_ready.go:102] pod "coredns-5dd5756b68-kcjf2" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:04.878534   58676 pod_ready.go:92] pod "coredns-5dd5756b68-kcjf2" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:04.878556   58676 pod_ready.go:81] duration metric: took 6.519426334s waiting for pod "coredns-5dd5756b68-kcjf2" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:04.878565   58676 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:03.508377   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:05.508570   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:03.990099   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:06.488196   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:07.005182   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:09.505205   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:06.907992   58676 pod_ready.go:102] pod "etcd-no-preload-008483" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:09.400005   58676 pod_ready.go:102] pod "etcd-no-preload-008483" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:09.900354   58676 pod_ready.go:92] pod "etcd-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:09.900379   58676 pod_ready.go:81] duration metric: took 5.021808339s waiting for pod "etcd-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.900394   58676 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.906496   58676 pod_ready.go:92] pod "kube-apiserver-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:09.906520   58676 pod_ready.go:81] duration metric: took 6.117499ms waiting for pod "kube-apiserver-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.906532   58676 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.911830   58676 pod_ready.go:92] pod "kube-controller-manager-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:09.911850   58676 pod_ready.go:81] duration metric: took 5.311751ms waiting for pod "kube-controller-manager-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.911859   58676 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xjfsj" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.916419   58676 pod_ready.go:92] pod "kube-proxy-xjfsj" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:09.916442   58676 pod_ready.go:81] duration metric: took 4.576855ms waiting for pod "kube-proxy-xjfsj" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.916454   58676 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.921501   58676 pod_ready.go:92] pod "kube-scheduler-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:09.921525   58676 pod_ready.go:81] duration metric: took 5.064522ms waiting for pod "kube-scheduler-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.921536   58676 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:07.514883   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:10.008399   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:08.490011   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:10.988504   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:12.989076   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:11.507014   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:13.509053   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:12.205003   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:14.705621   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:12.509113   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:15.009543   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:15.487844   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:17.488178   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:16.003423   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:18.003597   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:20.004472   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:17.205434   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:19.214743   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:17.508997   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:20.008838   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:22.009023   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:19.488902   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:21.988210   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:22.004908   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:24.503394   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:21.704199   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:23.704855   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:25.705319   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:24.508980   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:27.008249   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:23.988985   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:26.489079   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:26.504752   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:28.505579   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:27.709065   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:30.205608   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:29.507299   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:31.509017   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:28.988567   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:31.488567   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:30.507770   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:33.005199   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:32.707783   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:35.206392   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:34.007977   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:36.008250   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:33.988120   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:36.489908   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:35.503482   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:37.504132   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:39.504348   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:37.704511   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:39.705791   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:38.008778   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:40.509040   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:38.987615   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:40.988646   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:42.005253   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:44.008492   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:42.206082   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:44.704875   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:43.009095   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:45.508557   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:43.489792   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:45.987971   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:47.989322   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:46.504096   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:49.004605   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:47.205736   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:49.704264   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:47.510014   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:50.009950   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:50.489334   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:52.987877   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:51.005543   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:53.504243   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:52.205173   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:54.704843   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:52.509247   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:55.009346   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:55.488330   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:57.987845   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:55.504494   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:58.003674   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:00.004598   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:57.205092   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:59.705637   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:57.522422   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:00.007902   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:02.009964   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:59.987956   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:01.989730   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:02.005953   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:04.007095   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:02.205761   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:04.704065   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:04.508531   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:06.512303   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:04.487667   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:06.487854   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:06.503630   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:08.504993   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:06.704568   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:08.705012   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:09.008519   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:11.509450   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:08.488843   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:10.987614   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:12.989824   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:10.505932   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:13.005799   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:11.203683   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:13.204241   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:15.705287   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:14.008244   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:16.009433   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:15.488278   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:17.988683   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:15.503739   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:17.506253   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:20.004613   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:18.204056   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:20.205312   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:18.009706   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:20.508744   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:20.490044   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:22.989002   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:22.504922   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:25.004156   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:22.704711   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:25.205072   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:23.008359   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:25.509196   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:25.487961   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:27.488324   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:27.008179   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:29.504182   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:27.205671   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:29.208402   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:27.509247   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:30.008627   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:29.988286   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:32.487504   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:31.504973   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:34.004168   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:31.704298   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:33.704452   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:32.507959   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:35.008631   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:37.009271   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:34.488458   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:36.488759   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:36.503146   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:38.504444   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:36.204750   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:38.705346   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:39.507406   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:41.509812   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:38.988439   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:41.491496   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:40.505301   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:42.506003   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:45.004872   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:41.204015   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:43.206055   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:45.705597   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:44.008441   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:46.009900   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:43.987813   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:45.988508   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:47.989201   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:47.505799   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:49.506424   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:48.204686   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:50.704155   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:48.511303   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:51.008360   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:50.488123   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:52.488356   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:52.004387   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:54.505016   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:52.705891   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:54.706732   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:53.008988   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:55.507791   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:54.988620   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:56.990186   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:57.005565   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:59.505220   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:57.205342   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:59.215160   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:57.508013   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:59.509883   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:01.510115   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:59.490512   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:01.988008   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:02.004869   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:04.503903   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:01.704963   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:04.204466   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:04.007146   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:06.007815   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:04.488270   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:06.987544   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:06.505818   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:09.006093   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:06.205560   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:08.703961   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:10.705037   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:08.008817   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:10.508585   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:08.988223   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:10.989742   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:12.990669   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:11.503914   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:13.504018   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:13.206290   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:15.704820   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:13.008696   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:15.010312   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:15.487596   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:17.489381   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:15.505665   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:18.004825   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:20.004966   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:18.205022   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:20.703582   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:17.508842   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:20.008489   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:22.008572   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:19.988378   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:22.490000   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:22.005055   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:24.504050   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:22.704263   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:24.704479   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:24.507893   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:27.009371   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:24.988500   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:27.490306   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:26.504850   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:29.003907   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:27.204442   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:29.204906   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:29.508234   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:31.508285   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:29.988549   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:32.490618   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:31.504800   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:33.506025   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:31.704974   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:34.204565   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:33.512784   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:36.009709   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:34.988579   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:37.491535   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:36.011080   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:38.503881   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:36.204772   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:38.205329   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:40.707128   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:38.509404   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:41.009915   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:39.988897   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:42.487751   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:40.504606   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:42.504912   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:44.505101   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:43.205005   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:45.207096   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:43.507714   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:45.508866   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:44.988852   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:47.488268   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:47.004069   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:49.005029   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:47.704762   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:49.705584   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:48.009495   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:50.508392   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:49.488880   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:51.988841   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:51.504680   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:54.010010   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:52.204554   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:54.705101   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:53.008194   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:55.008373   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:57.009351   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:54.489702   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:56.389066   58730 pod_ready.go:81] duration metric: took 4m0.000951404s waiting for pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace to be "Ready" ...
	E1101 01:04:56.389116   58730 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1101 01:04:56.389139   58730 pod_ready.go:38] duration metric: took 4m11.103640013s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:04:56.389173   58730 kubeadm.go:640] restartCluster took 4m34.207263569s
	W1101 01:04:56.389254   58730 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1101 01:04:56.389292   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1101 01:04:56.504421   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:58.505542   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:56.705911   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:58.706099   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:00.706478   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:59.509462   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:02.009472   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:00.509320   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:03.007708   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:03.203884   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:05.204356   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:04.009580   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:06.508160   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:05.505057   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:07.506811   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:10.004080   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:07.205229   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:09.206089   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:08.509319   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:11.009099   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:12.261608   58730 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (15.872291337s)
	I1101 01:05:12.261694   58730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:05:12.275334   58730 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:05:12.284969   58730 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:05:12.295834   58730 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:05:12.295881   58730 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1101 01:05:12.526039   58730 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 01:05:12.005261   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:14.005683   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:11.706864   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:14.204758   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:13.508597   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:16.008784   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:16.506282   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:19.004037   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:16.205361   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:18.704890   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:18.008878   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:20.009861   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:23.201664   58730 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1101 01:05:23.201785   58730 kubeadm.go:322] [preflight] Running pre-flight checks
	I1101 01:05:23.201920   58730 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 01:05:23.202057   58730 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 01:05:23.202178   58730 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 01:05:23.202255   58730 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 01:05:23.204179   58730 out.go:204]   - Generating certificates and keys ...
	I1101 01:05:23.204304   58730 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1101 01:05:23.204384   58730 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1101 01:05:23.204480   58730 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 01:05:23.204557   58730 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1101 01:05:23.204639   58730 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1101 01:05:23.204715   58730 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1101 01:05:23.204792   58730 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1101 01:05:23.204884   58730 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1101 01:05:23.205007   58730 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 01:05:23.205133   58730 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 01:05:23.205195   58730 kubeadm.go:322] [certs] Using the existing "sa" key
	I1101 01:05:23.205273   58730 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 01:05:23.205332   58730 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 01:05:23.205391   58730 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 01:05:23.205461   58730 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 01:05:23.205550   58730 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 01:05:23.205656   58730 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 01:05:23.205734   58730 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 01:05:23.207792   58730 out.go:204]   - Booting up control plane ...
	I1101 01:05:23.207914   58730 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 01:05:23.208028   58730 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 01:05:23.208124   58730 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 01:05:23.208244   58730 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 01:05:23.208322   58730 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 01:05:23.208356   58730 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1101 01:05:23.208496   58730 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 01:05:23.208569   58730 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003034 seconds
	I1101 01:05:23.208662   58730 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 01:05:23.208762   58730 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 01:05:23.208840   58730 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 01:05:23.209055   58730 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-754132 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 01:05:23.209148   58730 kubeadm.go:322] [bootstrap-token] Using token: j0j8ab.rja1mh5j9krst0k4
	I1101 01:05:23.210755   58730 out.go:204]   - Configuring RBAC rules ...
	I1101 01:05:23.210895   58730 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 01:05:23.211001   58730 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 01:05:23.211205   58730 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 01:05:23.211369   58730 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 01:05:23.211509   58730 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 01:05:23.211617   58730 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 01:05:23.211776   58730 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 01:05:23.211851   58730 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1101 01:05:23.211894   58730 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1101 01:05:23.211901   58730 kubeadm.go:322] 
	I1101 01:05:23.211985   58730 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1101 01:05:23.211992   58730 kubeadm.go:322] 
	I1101 01:05:23.212076   58730 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1101 01:05:23.212085   58730 kubeadm.go:322] 
	I1101 01:05:23.212128   58730 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1101 01:05:23.212205   58730 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 01:05:23.212256   58730 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 01:05:23.212263   58730 kubeadm.go:322] 
	I1101 01:05:23.212305   58730 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1101 01:05:23.212314   58730 kubeadm.go:322] 
	I1101 01:05:23.212352   58730 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 01:05:23.212359   58730 kubeadm.go:322] 
	I1101 01:05:23.212400   58730 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1101 01:05:23.212461   58730 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 01:05:23.212568   58730 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 01:05:23.212584   58730 kubeadm.go:322] 
	I1101 01:05:23.212699   58730 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 01:05:23.212787   58730 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1101 01:05:23.212797   58730 kubeadm.go:322] 
	I1101 01:05:23.212862   58730 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token j0j8ab.rja1mh5j9krst0k4 \
	I1101 01:05:23.212943   58730 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 \
	I1101 01:05:23.212962   58730 kubeadm.go:322] 	--control-plane 
	I1101 01:05:23.212968   58730 kubeadm.go:322] 
	I1101 01:05:23.213083   58730 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1101 01:05:23.213093   58730 kubeadm.go:322] 
	I1101 01:05:23.213202   58730 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token j0j8ab.rja1mh5j9krst0k4 \
	I1101 01:05:23.213346   58730 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 
	I1101 01:05:23.213366   58730 cni.go:84] Creating CNI manager for ""
	I1101 01:05:23.213375   58730 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:05:23.215058   58730 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:05:23.216515   58730 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:05:23.251532   58730 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
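	For the bridge CNI step above, the log records only that a 457-byte conflist was copied to /etc/cni/net.d/1-k8s.conflist; the payload itself is not printed. The Go sketch below writes a generic bridge + host-local conflist of that shape; the JSON values (bridge name, subnet) are illustrative assumptions, not the file minikube generated for this run, which is transferred over SSH rather than written locally.

	    // cni_conflist_sketch.go -- illustrative only; the exact conflist contents
	    // used by this test run are not shown in the log.
	    package main

	    import "os"

	    const conflist = `{
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": {
	            "type": "host-local",
	            "subnet": "10.244.0.0/16"
	          }
	        },
	        {
	          "type": "portmap",
	          "capabilities": {"portMappings": true}
	        }
	      ]
	    }`

	    func main() {
	        // Write the conflist where the kubelet and CRI-O expect CNI configs.
	        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
	            panic(err)
	        }
	    }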
	I1101 01:05:21.007674   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:23.505067   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:21.204745   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:23.206316   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:25.211036   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:22.507158   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:24.508157   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:26.508990   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:23.291112   58730 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 01:05:23.291192   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:23.291224   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9 minikube.k8s.io/name=embed-certs-754132 minikube.k8s.io/updated_at=2023_11_01T01_05_23_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:23.452410   58730 ops.go:34] apiserver oom_adj: -16
	I1101 01:05:23.635798   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:23.754993   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:24.350830   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:24.850468   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:25.350887   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:25.850719   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:26.350946   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:26.850869   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:27.350851   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:27.850856   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:25.507083   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:27.511273   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:29.974545   59148 pod_ready.go:81] duration metric: took 4m0.000148043s waiting for pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace to be "Ready" ...
	E1101 01:05:29.974585   59148 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1101 01:05:29.974607   59148 pod_ready.go:38] duration metric: took 4m5.715718658s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:05:29.974652   59148 kubeadm.go:640] restartCluster took 4m26.139306333s
	W1101 01:05:29.974746   59148 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1101 01:05:29.974779   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1101 01:05:27.704338   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:30.205751   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:29.008649   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:31.009235   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:28.350920   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:28.850670   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:29.350172   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:29.850241   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:30.351225   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:30.851276   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:31.350289   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:31.850999   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:32.350874   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:32.850500   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:32.708147   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:35.205568   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:33.351023   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:33.851109   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:34.351257   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:34.850212   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:35.350277   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:35.850281   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:36.350770   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:36.456508   58730 kubeadm.go:1081] duration metric: took 13.165385995s to wait for elevateKubeSystemPrivileges.
	I1101 01:05:36.456550   58730 kubeadm.go:406] StartCluster complete in 5m14.31984828s
	I1101 01:05:36.456575   58730 settings.go:142] acquiring lock: {Name:mk7f269e64dfd8d176737f993e01f6e6badafbc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:05:36.456674   58730 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 01:05:36.458488   58730 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/kubeconfig: {Name:mk08da65b6c71084e1cfafb19800038e8c8303e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:05:36.458789   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 01:05:36.458936   58730 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1101 01:05:36.459029   58730 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-754132"
	I1101 01:05:36.459061   58730 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-754132"
	W1101 01:05:36.459076   58730 addons.go:240] addon storage-provisioner should already be in state true
	I1101 01:05:36.459086   58730 addons.go:69] Setting metrics-server=true in profile "embed-certs-754132"
	I1101 01:05:36.459124   58730 addons.go:231] Setting addon metrics-server=true in "embed-certs-754132"
	I1101 01:05:36.459134   58730 host.go:66] Checking if "embed-certs-754132" exists ...
	I1101 01:05:36.459060   58730 config.go:182] Loaded profile config "embed-certs-754132": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:05:36.459062   58730 addons.go:69] Setting default-storageclass=true in profile "embed-certs-754132"
	I1101 01:05:36.459219   58730 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-754132"
	W1101 01:05:36.459138   58730 addons.go:240] addon metrics-server should already be in state true
	I1101 01:05:36.459347   58730 host.go:66] Checking if "embed-certs-754132" exists ...
	I1101 01:05:36.459588   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.459633   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.459638   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.459674   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.459689   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.459713   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.477136   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40825
	I1101 01:05:36.477207   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37895
	I1101 01:05:36.477706   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46261
	I1101 01:05:36.477874   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.477889   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.478086   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.478388   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.478405   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.478540   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.478561   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.478601   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.478622   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.478794   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.478990   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.479037   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.479219   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetState
	I1101 01:05:36.479379   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.479412   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.479587   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.479623   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.483272   58730 addons.go:231] Setting addon default-storageclass=true in "embed-certs-754132"
	W1101 01:05:36.483295   58730 addons.go:240] addon default-storageclass should already be in state true
	I1101 01:05:36.483318   58730 host.go:66] Checking if "embed-certs-754132" exists ...
	I1101 01:05:36.483665   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.483696   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.498137   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46727
	I1101 01:05:36.498148   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37157
	I1101 01:05:36.498530   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.499000   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.499024   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.499329   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.499499   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetState
	I1101 01:05:36.501223   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:05:36.503752   58730 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:05:36.505580   58730 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:05:36.505600   58730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 01:05:36.505617   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:05:36.505756   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37761
	I1101 01:05:36.506307   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.506765   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.506783   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.507257   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.507303   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.507766   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.507786   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.507852   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.507894   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.508136   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.508296   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetState
	I1101 01:05:36.509982   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:05:36.512303   58730 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1101 01:05:36.512065   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:05:36.513712   58730 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 01:05:36.513728   58730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 01:05:36.513749   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:05:36.512082   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:05:36.513819   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:05:36.513839   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:05:36.516632   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:05:36.516867   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:05:36.517052   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:05:36.517489   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:05:36.518036   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:05:36.518058   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:05:36.518360   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:05:36.519431   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:05:36.519602   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:05:36.519742   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:05:36.526881   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35481
	I1101 01:05:36.527462   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.527889   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.527902   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.528341   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.528511   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetState
	I1101 01:05:36.530250   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:05:36.530539   58730 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 01:05:36.530557   58730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 01:05:36.530575   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:05:36.533671   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:05:36.534068   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:05:36.534093   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:05:36.534368   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:05:36.534596   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:05:36.534741   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:05:36.534821   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:05:36.559098   58730 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-754132" context rescaled to 1 replicas
	I1101 01:05:36.559135   58730 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.83 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 01:05:36.561061   58730 out.go:177] * Verifying Kubernetes components...
	I1101 01:05:33.009726   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:35.507972   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:36.562382   58730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:05:36.684098   58730 node_ready.go:35] waiting up to 6m0s for node "embed-certs-754132" to be "Ready" ...
	I1101 01:05:36.684219   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 01:05:36.689836   58730 node_ready.go:49] node "embed-certs-754132" has status "Ready":"True"
	I1101 01:05:36.689863   58730 node_ready.go:38] duration metric: took 5.731179ms waiting for node "embed-certs-754132" to be "Ready" ...
	I1101 01:05:36.689875   58730 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:05:36.707509   58730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:05:36.743671   58730 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 01:05:36.743702   58730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1101 01:05:36.764886   58730 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:36.773895   58730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 01:05:36.810064   58730 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 01:05:36.810095   58730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 01:05:36.888833   58730 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:05:36.888854   58730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 01:05:36.892836   58730 pod_ready.go:92] pod "etcd-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:05:36.892864   58730 pod_ready.go:81] duration metric: took 127.938482ms waiting for pod "etcd-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:36.892879   58730 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:36.968554   58730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:05:36.978210   58730 pod_ready.go:92] pod "kube-apiserver-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:05:36.978239   58730 pod_ready.go:81] duration metric: took 85.351942ms waiting for pod "kube-apiserver-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:36.978254   58730 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:37.154956   58730 pod_ready.go:92] pod "kube-controller-manager-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:05:37.154983   58730 pod_ready.go:81] duration metric: took 176.720364ms waiting for pod "kube-controller-manager-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:37.154997   58730 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cwbfz" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:38.405267   58730 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.720993157s)
	I1101 01:05:38.405302   58730 start.go:926] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1101 01:05:38.840834   58730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.133283925s)
	I1101 01:05:38.840891   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.840906   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.840918   58730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.066970508s)
	I1101 01:05:38.841048   58730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.872463156s)
	I1101 01:05:38.841085   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.841098   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.841320   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.841370   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.841373   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.841328   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.841400   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.841412   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.841426   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.841390   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.841442   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.841454   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.841457   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.841354   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.844717   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.844730   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.844723   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.844744   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.844753   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.844757   58730 addons.go:467] Verifying addon metrics-server=true in "embed-certs-754132"
	I1101 01:05:38.844763   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.844774   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.844773   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.844789   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.844799   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.844808   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.845059   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.845077   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.845092   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.890752   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.890785   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.891075   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.891095   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.891108   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.892878   58730 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I1101 01:05:37.706877   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:39.707206   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:38.894405   58730 addons.go:502] enable addons completed in 2.435477984s: enabled=[metrics-server storage-provisioner default-storageclass]
	I1101 01:05:39.279100   58730 pod_ready.go:102] pod "kube-proxy-cwbfz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:40.775597   58730 pod_ready.go:92] pod "kube-proxy-cwbfz" in "kube-system" namespace has status "Ready":"True"
	I1101 01:05:40.775622   58730 pod_ready.go:81] duration metric: took 3.620618998s waiting for pod "kube-proxy-cwbfz" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:40.775644   58730 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:40.782773   58730 pod_ready.go:92] pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:05:40.782796   58730 pod_ready.go:81] duration metric: took 7.145643ms waiting for pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:40.782806   58730 pod_ready.go:38] duration metric: took 4.092919772s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:05:40.782821   58730 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:05:40.782868   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:05:40.811977   58730 api_server.go:72] duration metric: took 4.252812827s to wait for apiserver process to appear ...
	I1101 01:05:40.812000   58730 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:05:40.812017   58730 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8443/healthz ...
	I1101 01:05:40.817524   58730 api_server.go:279] https://192.168.61.83:8443/healthz returned 200:
	ok
	I1101 01:05:40.819599   58730 api_server.go:141] control plane version: v1.28.3
	I1101 01:05:40.819625   58730 api_server.go:131] duration metric: took 7.617418ms to wait for apiserver health ...
	I1101 01:05:40.819636   58730 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:05:40.826677   58730 system_pods.go:59] 8 kube-system pods found
	I1101 01:05:40.826714   58730 system_pods.go:61] "coredns-5dd5756b68-6kqbc" [e03e6370-35d1-4438-8b18-d62b0a253ea6] Running
	I1101 01:05:40.826722   58730 system_pods.go:61] "etcd-embed-certs-754132" [2cd8789c-8ba8-47ea-82f2-e461cbc9d3b3] Running
	I1101 01:05:40.826729   58730 system_pods.go:61] "kube-apiserver-embed-certs-754132" [81bd13a3-37ea-4bf6-9eb9-e66318137a21] Running
	I1101 01:05:40.826735   58730 system_pods.go:61] "kube-controller-manager-embed-certs-754132" [6aa18435-1990-479b-b975-7ac1d794d967] Running
	I1101 01:05:40.826742   58730 system_pods.go:61] "kube-proxy-cwbfz" [b7f5ba1e-bd63-456b-94cc-0e2c121b7792] Running
	I1101 01:05:40.826748   58730 system_pods.go:61] "kube-scheduler-embed-certs-754132" [64203f31-7c41-42d0-9d6b-bc63e1b423cc] Running
	I1101 01:05:40.826758   58730 system_pods.go:61] "metrics-server-57f55c9bc5-499xs" [617aecda-f132-4358-9da9-bbc4fc625da0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:05:40.826773   58730 system_pods.go:61] "storage-provisioner" [7feb8931-83d0-4968-a295-a4202e8fc8c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 01:05:40.826786   58730 system_pods.go:74] duration metric: took 7.142747ms to wait for pod list to return data ...
	I1101 01:05:40.826799   58730 default_sa.go:34] waiting for default service account to be created ...
	I1101 01:05:40.831268   58730 default_sa.go:45] found service account: "default"
	I1101 01:05:40.831295   58730 default_sa.go:55] duration metric: took 4.485602ms for default service account to be created ...
	I1101 01:05:40.831309   58730 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 01:05:40.891306   58730 system_pods.go:86] 8 kube-system pods found
	I1101 01:05:40.891335   58730 system_pods.go:89] "coredns-5dd5756b68-6kqbc" [e03e6370-35d1-4438-8b18-d62b0a253ea6] Running
	I1101 01:05:40.891341   58730 system_pods.go:89] "etcd-embed-certs-754132" [2cd8789c-8ba8-47ea-82f2-e461cbc9d3b3] Running
	I1101 01:05:40.891346   58730 system_pods.go:89] "kube-apiserver-embed-certs-754132" [81bd13a3-37ea-4bf6-9eb9-e66318137a21] Running
	I1101 01:05:40.891350   58730 system_pods.go:89] "kube-controller-manager-embed-certs-754132" [6aa18435-1990-479b-b975-7ac1d794d967] Running
	I1101 01:05:40.891354   58730 system_pods.go:89] "kube-proxy-cwbfz" [b7f5ba1e-bd63-456b-94cc-0e2c121b7792] Running
	I1101 01:05:40.891358   58730 system_pods.go:89] "kube-scheduler-embed-certs-754132" [64203f31-7c41-42d0-9d6b-bc63e1b423cc] Running
	I1101 01:05:40.891366   58730 system_pods.go:89] "metrics-server-57f55c9bc5-499xs" [617aecda-f132-4358-9da9-bbc4fc625da0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:05:40.891373   58730 system_pods.go:89] "storage-provisioner" [7feb8931-83d0-4968-a295-a4202e8fc8c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 01:05:40.891381   58730 system_pods.go:126] duration metric: took 60.065984ms to wait for k8s-apps to be running ...
	I1101 01:05:40.891391   58730 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 01:05:40.891436   58730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:05:40.906845   58730 system_svc.go:56] duration metric: took 15.443235ms WaitForService to wait for kubelet.
	I1101 01:05:40.906875   58730 kubeadm.go:581] duration metric: took 4.347718478s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1101 01:05:40.906895   58730 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:05:41.089628   58730 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:05:41.089654   58730 node_conditions.go:123] node cpu capacity is 2
	I1101 01:05:41.089664   58730 node_conditions.go:105] duration metric: took 182.764311ms to run NodePressure ...
	I1101 01:05:41.089674   58730 start.go:228] waiting for startup goroutines ...
	I1101 01:05:41.089680   58730 start.go:233] waiting for cluster config update ...
	I1101 01:05:41.089693   58730 start.go:242] writing updated cluster config ...
	I1101 01:05:41.089950   58730 ssh_runner.go:195] Run: rm -f paused
	I1101 01:05:41.140594   58730 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1101 01:05:41.143142   58730 out.go:177] * Done! kubectl is now configured to use "embed-certs-754132" cluster and "default" namespace by default
	I1101 01:05:37.516552   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:40.009373   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:43.882201   59148 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.907397495s)
	I1101 01:05:43.882275   59148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:05:43.897793   59148 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:05:43.908350   59148 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:05:43.919013   59148 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:05:43.919066   59148 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1101 01:05:43.992534   59148 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1101 01:05:43.992653   59148 kubeadm.go:322] [preflight] Running pre-flight checks
	I1101 01:05:44.162750   59148 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 01:05:44.162906   59148 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 01:05:44.163052   59148 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 01:05:44.398016   59148 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 01:05:44.399998   59148 out.go:204]   - Generating certificates and keys ...
	I1101 01:05:44.400102   59148 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1101 01:05:44.400189   59148 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1101 01:05:44.400334   59148 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 01:05:44.400431   59148 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1101 01:05:44.400526   59148 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1101 01:05:44.400602   59148 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1101 01:05:44.400736   59148 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1101 01:05:44.400821   59148 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1101 01:05:44.401336   59148 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 01:05:44.401936   59148 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 01:05:44.402420   59148 kubeadm.go:322] [certs] Using the existing "sa" key
	I1101 01:05:44.402515   59148 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 01:05:44.470807   59148 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 01:05:44.642677   59148 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 01:05:44.768991   59148 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 01:05:45.052817   59148 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 01:05:45.053698   59148 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 01:05:45.056339   59148 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 01:05:42.204108   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:44.205679   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:42.508073   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:43.201762   58823 pod_ready.go:81] duration metric: took 4m0.000100455s waiting for pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace to be "Ready" ...
	E1101 01:05:43.201795   58823 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1101 01:05:43.201816   58823 pod_ready.go:38] duration metric: took 4m1.199592624s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:05:43.201848   58823 kubeadm.go:640] restartCluster took 4m57.555406731s
	W1101 01:05:43.201899   58823 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1101 01:05:43.201920   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1101 01:05:45.058304   59148 out.go:204]   - Booting up control plane ...
	I1101 01:05:45.058434   59148 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 01:05:45.058565   59148 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 01:05:45.060937   59148 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 01:05:45.078776   59148 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 01:05:45.079692   59148 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 01:05:45.079771   59148 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1101 01:05:45.204880   59148 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 01:05:46.208575   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:48.705698   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:50.708163   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:48.240337   58823 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.038387523s)
	I1101 01:05:48.240417   58823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:05:48.257585   58823 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:05:48.266949   58823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:05:48.277302   58823 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:05:48.277346   58823 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1101 01:05:48.514394   58823 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 01:05:54.708746   59148 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503354 seconds
	I1101 01:05:54.708894   59148 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 01:05:54.726194   59148 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 01:05:55.266392   59148 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 01:05:55.266670   59148 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-639310 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 01:05:55.783906   59148 kubeadm.go:322] [bootstrap-token] Using token: ilpx6n.m6vs8mqxrjuf2w8f
	I1101 01:05:53.205312   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:55.206016   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:55.786231   59148 out.go:204]   - Configuring RBAC rules ...
	I1101 01:05:55.786370   59148 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 01:05:55.793682   59148 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 01:05:55.812319   59148 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 01:05:55.819324   59148 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 01:05:55.825785   59148 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 01:05:55.831793   59148 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 01:05:55.858443   59148 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 01:05:56.195472   59148 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1101 01:05:56.248405   59148 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1101 01:05:56.249655   59148 kubeadm.go:322] 
	I1101 01:05:56.249745   59148 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1101 01:05:56.249759   59148 kubeadm.go:322] 
	I1101 01:05:56.249852   59148 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1101 01:05:56.249869   59148 kubeadm.go:322] 
	I1101 01:05:56.249931   59148 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1101 01:05:56.249992   59148 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 01:05:56.250076   59148 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 01:05:56.250088   59148 kubeadm.go:322] 
	I1101 01:05:56.250163   59148 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1101 01:05:56.250172   59148 kubeadm.go:322] 
	I1101 01:05:56.250261   59148 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 01:05:56.250281   59148 kubeadm.go:322] 
	I1101 01:05:56.250344   59148 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1101 01:05:56.250436   59148 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 01:05:56.250560   59148 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 01:05:56.250574   59148 kubeadm.go:322] 
	I1101 01:05:56.250663   59148 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 01:05:56.250757   59148 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1101 01:05:56.250769   59148 kubeadm.go:322] 
	I1101 01:05:56.250881   59148 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token ilpx6n.m6vs8mqxrjuf2w8f \
	I1101 01:05:56.251011   59148 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 \
	I1101 01:05:56.251041   59148 kubeadm.go:322] 	--control-plane 
	I1101 01:05:56.251053   59148 kubeadm.go:322] 
	I1101 01:05:56.251150   59148 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1101 01:05:56.251162   59148 kubeadm.go:322] 
	I1101 01:05:56.251259   59148 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token ilpx6n.m6vs8mqxrjuf2w8f \
	I1101 01:05:56.251383   59148 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 
	I1101 01:05:56.251922   59148 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 01:05:56.251982   59148 cni.go:84] Creating CNI manager for ""
	I1101 01:05:56.252008   59148 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:05:56.254247   59148 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:05:56.256068   59148 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:05:56.281994   59148 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1101 01:05:56.324660   59148 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 01:05:56.324796   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:56.324863   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9 minikube.k8s.io/name=default-k8s-diff-port-639310 minikube.k8s.io/updated_at=2023_11_01T01_05_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:56.739064   59148 ops.go:34] apiserver oom_adj: -16
	I1101 01:05:56.739245   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:56.834852   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:57.432044   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:57.931920   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:58.432414   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:58.932871   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:59.432755   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:59.932515   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:57.704234   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:59.705516   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:01.231970   58823 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1101 01:06:01.232062   58823 kubeadm.go:322] [preflight] Running pre-flight checks
	I1101 01:06:01.232156   58823 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 01:06:01.232289   58823 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 01:06:01.232419   58823 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 01:06:01.232595   58823 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 01:06:01.232714   58823 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 01:06:01.232790   58823 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1101 01:06:01.232890   58823 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 01:06:01.235429   58823 out.go:204]   - Generating certificates and keys ...
	I1101 01:06:01.235533   58823 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1101 01:06:01.235606   58823 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1101 01:06:01.235675   58823 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 01:06:01.235782   58823 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1101 01:06:01.235889   58823 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1101 01:06:01.235973   58823 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1101 01:06:01.236065   58823 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1101 01:06:01.236153   58823 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1101 01:06:01.236263   58823 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 01:06:01.236383   58823 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 01:06:01.236447   58823 kubeadm.go:322] [certs] Using the existing "sa" key
	I1101 01:06:01.236528   58823 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 01:06:01.236607   58823 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 01:06:01.236728   58823 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 01:06:01.236811   58823 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 01:06:01.236877   58823 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 01:06:01.236955   58823 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 01:06:01.238699   58823 out.go:204]   - Booting up control plane ...
	I1101 01:06:01.238808   58823 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 01:06:01.238904   58823 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 01:06:01.238990   58823 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 01:06:01.239092   58823 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 01:06:01.239289   58823 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 01:06:01.239387   58823 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.004023 seconds
	I1101 01:06:01.239528   58823 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 01:06:01.239741   58823 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 01:06:01.239796   58823 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 01:06:01.239971   58823 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-330042 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1101 01:06:01.240056   58823 kubeadm.go:322] [bootstrap-token] Using token: lseik6.3ozwuciianl7vrri
	I1101 01:06:01.241690   58823 out.go:204]   - Configuring RBAC rules ...
	I1101 01:06:01.241825   58823 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 01:06:01.242015   58823 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 01:06:01.242170   58823 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 01:06:01.242265   58823 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 01:06:01.242380   58823 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 01:06:01.242448   58823 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1101 01:06:01.242517   58823 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1101 01:06:01.242549   58823 kubeadm.go:322] 
	I1101 01:06:01.242631   58823 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1101 01:06:01.242646   58823 kubeadm.go:322] 
	I1101 01:06:01.242753   58823 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1101 01:06:01.242764   58823 kubeadm.go:322] 
	I1101 01:06:01.242801   58823 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1101 01:06:01.242883   58823 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 01:06:01.242956   58823 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 01:06:01.242965   58823 kubeadm.go:322] 
	I1101 01:06:01.243041   58823 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1101 01:06:01.243152   58823 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 01:06:01.243249   58823 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 01:06:01.243261   58823 kubeadm.go:322] 
	I1101 01:06:01.243357   58823 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1101 01:06:01.243421   58823 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1101 01:06:01.243425   58823 kubeadm.go:322] 
	I1101 01:06:01.243490   58823 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token lseik6.3ozwuciianl7vrri \
	I1101 01:06:01.243597   58823 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 \
	I1101 01:06:01.243619   58823 kubeadm.go:322]     --control-plane 	  
	I1101 01:06:01.243623   58823 kubeadm.go:322] 
	I1101 01:06:01.243697   58823 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1101 01:06:01.243702   58823 kubeadm.go:322] 
	I1101 01:06:01.243773   58823 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token lseik6.3ozwuciianl7vrri \
	I1101 01:06:01.243923   58823 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 
	I1101 01:06:01.243967   58823 cni.go:84] Creating CNI manager for ""
	I1101 01:06:01.243979   58823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:06:01.246766   58823 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:06:01.248244   58823 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:06:01.274713   58823 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1101 01:06:01.299087   58823 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 01:06:01.299184   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:01.299241   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9 minikube.k8s.io/name=old-k8s-version-330042 minikube.k8s.io/updated_at=2023_11_01T01_06_01_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:01.350480   58823 ops.go:34] apiserver oom_adj: -16
	I1101 01:06:01.668212   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:01.795923   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:02.398955   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:00.432038   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:00.932486   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:01.431924   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:01.932050   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:02.432828   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:02.932070   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:03.432833   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:03.931826   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:04.432522   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:04.932660   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:01.705717   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:04.205431   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:02.899285   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:03.398507   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:03.898445   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:04.399301   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:04.898647   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:05.399211   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:05.899099   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:06.398426   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:06.898703   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:07.399266   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:05.431880   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:05.932001   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:06.432804   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:06.932744   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:07.432405   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:07.932531   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:08.432007   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:08.560694   59148 kubeadm.go:1081] duration metric: took 12.235943971s to wait for elevateKubeSystemPrivileges.
	I1101 01:06:08.560733   59148 kubeadm.go:406] StartCluster complete in 5m4.77698433s
	I1101 01:06:08.560756   59148 settings.go:142] acquiring lock: {Name:mk7f269e64dfd8d176737f993e01f6e6badafbc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:06:08.560862   59148 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 01:06:08.563346   59148 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/kubeconfig: {Name:mk08da65b6c71084e1cfafb19800038e8c8303e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:06:08.563655   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 01:06:08.563793   59148 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1101 01:06:08.563857   59148 config.go:182] Loaded profile config "default-k8s-diff-port-639310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:06:08.563874   59148 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-639310"
	I1101 01:06:08.563892   59148 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-639310"
	I1101 01:06:08.563905   59148 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-639310"
	I1101 01:06:08.563917   59148 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-639310"
	I1101 01:06:08.563950   59148 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-639310"
	I1101 01:06:08.563899   59148 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-639310"
	W1101 01:06:08.563962   59148 addons.go:240] addon metrics-server should already be in state true
	W1101 01:06:08.563990   59148 addons.go:240] addon storage-provisioner should already be in state true
	I1101 01:06:08.564025   59148 host.go:66] Checking if "default-k8s-diff-port-639310" exists ...
	I1101 01:06:08.564064   59148 host.go:66] Checking if "default-k8s-diff-port-639310" exists ...
	I1101 01:06:08.564369   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.564404   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.564421   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.564453   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.564455   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.564488   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.581714   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37509
	I1101 01:06:08.582180   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.583081   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35137
	I1101 01:06:08.583312   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.583332   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.583553   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41541
	I1101 01:06:08.583702   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.583714   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.583891   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.584174   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.584200   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.584272   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.584302   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.584638   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.584687   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.584737   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.584993   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.585152   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetState
	I1101 01:06:08.585215   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.585256   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.588703   59148 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-639310"
	W1101 01:06:08.588728   59148 addons.go:240] addon default-storageclass should already be in state true
	I1101 01:06:08.588754   59148 host.go:66] Checking if "default-k8s-diff-port-639310" exists ...
	I1101 01:06:08.589158   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.589193   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.600826   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40787
	I1101 01:06:08.601314   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.601952   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.601976   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.602335   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.602560   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetState
	I1101 01:06:08.603276   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35887
	I1101 01:06:08.603415   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36765
	I1101 01:06:08.603803   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.604098   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.604276   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.604290   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.604490   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.604506   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.604573   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:06:08.604778   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.606338   59148 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:06:08.605001   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.605380   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.607632   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.607705   59148 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:06:08.607717   59148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 01:06:08.607731   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:06:08.607995   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetState
	I1101 01:06:08.610424   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:06:08.612025   59148 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1101 01:06:08.613346   59148 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 01:06:08.613365   59148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 01:06:08.613386   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:06:08.611304   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:06:08.611864   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:06:08.613461   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:06:08.613508   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:06:08.613650   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:06:08.613769   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:06:08.613869   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:06:08.618717   59148 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-639310" context rescaled to 1 replicas
	I1101 01:06:08.618755   59148 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.97 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 01:06:08.620291   59148 out.go:177] * Verifying Kubernetes components...
	I1101 01:06:08.618896   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:06:08.620048   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:06:08.621662   59148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:06:08.621747   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:06:08.621777   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:06:08.622129   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:06:08.622359   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:06:08.622526   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:06:08.629241   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42169
	I1101 01:06:08.629773   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.630164   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.630181   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.630428   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.630558   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetState
	I1101 01:06:08.631892   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:06:08.632176   59148 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 01:06:08.632197   59148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 01:06:08.632216   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:06:08.634872   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:06:08.635211   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:06:08.635241   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:06:08.635375   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:06:08.635576   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:06:08.635713   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:06:08.635839   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:06:08.984005   59148 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 01:06:08.984032   59148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1101 01:06:08.991838   59148 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-639310" to be "Ready" ...
	I1101 01:06:08.991921   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 01:06:09.011096   59148 node_ready.go:49] node "default-k8s-diff-port-639310" has status "Ready":"True"
	I1101 01:06:09.011124   59148 node_ready.go:38] duration metric: took 19.250763ms waiting for node "default-k8s-diff-port-639310" to be "Ready" ...
	I1101 01:06:09.011136   59148 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:06:09.043526   59148 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:09.071032   59148 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 01:06:09.071065   59148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 01:06:09.089683   59148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 01:06:09.090332   59148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:06:09.139676   59148 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:06:09.139702   59148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 01:06:09.219436   59148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:06:06.705499   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:09.207584   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:09.922465   58676 pod_ready.go:81] duration metric: took 4m0.000913678s waiting for pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace to be "Ready" ...
	E1101 01:06:09.922511   58676 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1101 01:06:09.922529   58676 pod_ready.go:38] duration metric: took 4m11.570999497s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:06:09.922566   58676 kubeadm.go:640] restartCluster took 4m30.866358786s
	W1101 01:06:09.922644   58676 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1101 01:06:09.922688   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1101 01:06:11.075881   59148 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.083916099s)
	I1101 01:06:11.075915   59148 start.go:926] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1101 01:06:11.075946   59148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.986221728s)
	I1101 01:06:11.075997   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.076012   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.076348   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.076367   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.076377   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.076386   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.076620   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.076639   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.119713   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.119741   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.120145   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.120170   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.120145   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | Closing plugin on server side
	I1101 01:06:11.172242   59148 pod_ready.go:102] pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:11.954880   59148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.864508967s)
	I1101 01:06:11.954945   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.954960   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.955014   59148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.735537793s)
	I1101 01:06:11.955074   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.955088   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.955379   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | Closing plugin on server side
	I1101 01:06:11.955411   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.955418   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.955429   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.955438   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.957487   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | Closing plugin on server side
	I1101 01:06:11.957532   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.957549   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.957537   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.957612   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.957566   59148 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-639310"
	I1101 01:06:11.957643   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.957672   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.958036   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.958063   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.960489   59148 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I1101 01:06:07.899402   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:08.398731   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:08.898547   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:09.399015   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:09.898437   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:10.399024   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:10.899108   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:11.398482   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:11.898943   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:12.399022   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:11.962129   59148 addons.go:502] enable addons completed in 3.39833009s: enabled=[default-storageclass metrics-server storage-provisioner]
	I1101 01:06:13.684297   59148 pod_ready.go:102] pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:12.899212   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:13.398415   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:13.898444   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:14.398630   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:14.898427   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:15.399212   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:15.898869   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:16.399289   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:16.588122   58823 kubeadm.go:1081] duration metric: took 15.28901357s to wait for elevateKubeSystemPrivileges.
	I1101 01:06:16.588166   58823 kubeadm.go:406] StartCluster complete in 5m31.002121514s
	I1101 01:06:16.588190   58823 settings.go:142] acquiring lock: {Name:mk7f269e64dfd8d176737f993e01f6e6badafbc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:06:16.588290   58823 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 01:06:16.590925   58823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/kubeconfig: {Name:mk08da65b6c71084e1cfafb19800038e8c8303e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:06:16.591235   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 01:06:16.591339   58823 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1101 01:06:16.591416   58823 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-330042"
	I1101 01:06:16.591436   58823 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-330042"
	W1101 01:06:16.591444   58823 addons.go:240] addon storage-provisioner should already be in state true
	I1101 01:06:16.591477   58823 config.go:182] Loaded profile config "old-k8s-version-330042": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1101 01:06:16.591517   58823 host.go:66] Checking if "old-k8s-version-330042" exists ...
	I1101 01:06:16.591525   58823 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-330042"
	I1101 01:06:16.591541   58823 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-330042"
	I1101 01:06:16.591923   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.591924   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.591962   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.591980   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.592045   58823 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-330042"
	I1101 01:06:16.592064   58823 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-330042"
	W1101 01:06:16.592071   58823 addons.go:240] addon metrics-server should already be in state true
	I1101 01:06:16.592104   58823 host.go:66] Checking if "old-k8s-version-330042" exists ...
	I1101 01:06:16.592424   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.592468   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.610602   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35459
	I1101 01:06:16.611188   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.611722   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.611752   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.611893   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35425
	I1101 01:06:16.612233   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.612315   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.612802   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.612841   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.613196   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.613215   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.613550   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.613571   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39319
	I1101 01:06:16.613949   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.614126   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.614159   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.614425   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.614438   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.614811   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.614997   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetState
	I1101 01:06:16.617747   58823 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-330042"
	W1101 01:06:16.617763   58823 addons.go:240] addon default-storageclass should already be in state true
	I1101 01:06:16.617783   58823 host.go:66] Checking if "old-k8s-version-330042" exists ...
	I1101 01:06:16.618021   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.618044   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.633877   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37903
	I1101 01:06:16.634227   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34049
	I1101 01:06:16.634436   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.635052   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.635225   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.635251   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.635588   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.635603   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.635656   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.636032   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.636092   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetState
	I1101 01:06:16.636310   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetState
	I1101 01:06:16.637897   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:06:16.640069   58823 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:06:16.638479   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:06:16.640887   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35501
	I1101 01:06:16.641511   58823 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:06:16.641523   58823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 01:06:16.641540   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:06:16.642477   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.643099   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.643115   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.643826   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.644397   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.644432   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.644515   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:06:16.644534   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:06:16.644549   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:06:16.644743   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:06:16.644908   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:06:16.645006   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:06:16.645102   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:06:16.648901   58823 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1101 01:06:16.650287   58823 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 01:06:16.650299   58823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 01:06:16.650316   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:06:16.654323   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:06:16.654694   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:06:16.654720   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:06:16.655020   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:06:16.655268   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:06:16.655450   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:06:16.655600   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:06:16.663888   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32991
	I1101 01:06:16.664490   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.665023   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.665049   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.665533   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.665720   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetState
	I1101 01:06:16.667516   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:06:16.667817   58823 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 01:06:16.667837   58823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 01:06:16.667856   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:06:16.670789   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:06:16.671306   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:06:16.671332   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:06:16.671519   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:06:16.671688   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:06:16.671811   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:06:16.671974   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:06:16.738145   58823 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-330042" context rescaled to 1 replicas
	I1101 01:06:16.738193   58823 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 01:06:16.740269   58823 out.go:177] * Verifying Kubernetes components...
	I1101 01:06:16.741889   58823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:06:16.827316   58823 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 01:06:16.827347   58823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1101 01:06:16.846888   58823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:06:16.868760   58823 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-330042" to be "Ready" ...
	I1101 01:06:16.868848   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 01:06:16.885920   58823 node_ready.go:49] node "old-k8s-version-330042" has status "Ready":"True"
	I1101 01:06:16.885962   58823 node_ready.go:38] duration metric: took 17.171382ms waiting for node "old-k8s-version-330042" to be "Ready" ...
	I1101 01:06:16.885975   58823 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:06:16.907070   58823 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-v2xlz" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:16.929166   58823 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 01:06:16.929190   58823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 01:06:16.946209   58823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 01:06:17.010599   58823 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:06:17.010628   58823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 01:06:17.132054   58823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:06:17.868039   58823 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1101 01:06:17.868039   58823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.021104248s)
	I1101 01:06:17.868120   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:17.868126   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:17.868140   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:17.868142   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:17.870315   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Closing plugin on server side
	I1101 01:06:17.870338   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Closing plugin on server side
	I1101 01:06:17.870352   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:17.870364   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:17.870378   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:17.870400   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:17.870429   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:17.870439   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:17.870448   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:17.870470   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:17.870865   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Closing plugin on server side
	I1101 01:06:17.870866   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Closing plugin on server side
	I1101 01:06:17.870876   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:17.870890   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:17.870899   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:17.870915   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:17.920542   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:17.920570   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:17.920923   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Closing plugin on server side
	I1101 01:06:17.920969   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:17.920980   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:18.189030   58823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.056928538s)
	I1101 01:06:18.189096   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:18.189109   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:18.189446   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Closing plugin on server side
	I1101 01:06:18.189464   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:18.189476   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:18.189486   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:18.189506   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:18.189735   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:18.189752   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:18.189760   58823 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-330042"
	I1101 01:06:18.192103   58823 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1101 01:06:16.156689   59148 pod_ready.go:102] pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:18.158318   59148 pod_ready.go:102] pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:18.194035   58823 addons.go:502] enable addons completed in 1.602699312s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1101 01:06:18.978162   58823 pod_ready.go:102] pod "coredns-5644d7b6d9-v2xlz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:21.456448   58823 pod_ready.go:102] pod "coredns-5644d7b6d9-v2xlz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:20.657398   59148 pod_ready.go:102] pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:22.156680   59148 pod_ready.go:97] pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.72.97 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-11-01 01:06:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-11-01 01:06:11 +0000 UTC,FinishedAt:2023-11-01 01:06:21 +0000 UTC,ContainerID:cri-o://1ecc4b16207e32548d5d59a4bb7a01519d7e5eaf75b05171abd6c8c635656811,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://1ecc4b16207e32548d5d59a4bb7a01519d7e5eaf75b05171abd6c8c635656811 Started:0xc002af16c0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1101 01:06:22.156709   59148 pod_ready.go:81] duration metric: took 13.113156669s waiting for pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace to be "Ready" ...
	E1101 01:06:22.156718   59148 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.72.97 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-11-01 01:06:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-11-01 01:06:11 +0000 UTC,FinishedAt:2023-11-01 01:06:21 +0000 UTC,ContainerID:cri-o://1ecc4b16207e32548d5d59a4bb7a01519d7e5eaf75b05171abd6c8c635656811,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://1ecc4b16207e32548d5d59a4bb7a01519d7e5eaf75b05171abd6c8c635656811 Started:0xc002af16c0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1101 01:06:22.156726   59148 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rgzt8" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.163387   59148 pod_ready.go:92] pod "coredns-5dd5756b68-rgzt8" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:22.163410   59148 pod_ready.go:81] duration metric: took 6.677078ms waiting for pod "coredns-5dd5756b68-rgzt8" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.163423   59148 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.168499   59148 pod_ready.go:92] pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:22.168519   59148 pod_ready.go:81] duration metric: took 5.088683ms waiting for pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.168528   59148 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.174117   59148 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:22.174143   59148 pod_ready.go:81] duration metric: took 5.607251ms waiting for pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.174157   59148 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.179321   59148 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:22.179344   59148 pod_ready.go:81] duration metric: took 5.178241ms waiting for pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.179356   59148 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kzgzn" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.554016   59148 pod_ready.go:92] pod "kube-proxy-kzgzn" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:22.554047   59148 pod_ready.go:81] duration metric: took 374.683914ms waiting for pod "kube-proxy-kzgzn" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.554061   59148 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.954192   59148 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:22.954216   59148 pod_ready.go:81] duration metric: took 400.146517ms waiting for pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.954226   59148 pod_ready.go:38] duration metric: took 13.943077925s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:06:22.954243   59148 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:06:22.954294   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:06:22.970594   59148 api_server.go:72] duration metric: took 14.351804953s to wait for apiserver process to appear ...
	I1101 01:06:22.970621   59148 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:06:22.970638   59148 api_server.go:253] Checking apiserver healthz at https://192.168.72.97:8444/healthz ...
	I1101 01:06:22.976061   59148 api_server.go:279] https://192.168.72.97:8444/healthz returned 200:
	ok
	I1101 01:06:22.977368   59148 api_server.go:141] control plane version: v1.28.3
	I1101 01:06:22.977390   59148 api_server.go:131] duration metric: took 6.761145ms to wait for apiserver health ...
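
The healthz probe logged above is a plain HTTPS GET against the apiserver. A minimal, self-contained Go sketch of the same kind of check follows; the URL is taken from the log, and skipping TLS verification is an assumption made only to keep the sketch short (minikube itself trusts the cluster CA from the kubeconfig), so this is an illustration, not minikube's actual implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Probe the same endpoint the log reports: https://192.168.72.97:8444/healthz
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for this sketch only; the real check authenticates with the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.72.97:8444/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// Expect "200: ok" on a healthy control plane, matching the log above.
	fmt.Printf("%d: %s\n", resp.StatusCode, body)
}
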
	I1101 01:06:22.977398   59148 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:06:23.156987   59148 system_pods.go:59] 8 kube-system pods found
	I1101 01:06:23.157014   59148 system_pods.go:61] "coredns-5dd5756b68-rgzt8" [6d136c6a-e0b2-44c3-a17b-85649d6ff7b7] Running
	I1101 01:06:23.157018   59148 system_pods.go:61] "etcd-default-k8s-diff-port-639310" [9cc2eba7-c72f-4a6f-9c55-8cce5586b574] Running
	I1101 01:06:23.157024   59148 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-639310" [e2b16d1e-af9f-452e-8243-5267f781ab19] Running
	I1101 01:06:23.157028   59148 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-639310" [9173e21f-a13f-4234-94a1-1976881ee23d] Running
	I1101 01:06:23.157034   59148 system_pods.go:61] "kube-proxy-kzgzn" [32d59980-f28a-482c-9aa8-8502915417f0] Running
	I1101 01:06:23.157038   59148 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-639310" [449df462-911a-4afa-8ca5-f9fccce9ecac] Running
	I1101 01:06:23.157046   59148 system_pods.go:61] "metrics-server-57f55c9bc5-65ph4" [4683706e-65f6-4845-a5ad-60da8cd20d8e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:23.157053   59148 system_pods.go:61] "storage-provisioner" [eaba9583-e564-4804-9cd3-2b4de36c85da] Running
	I1101 01:06:23.157060   59148 system_pods.go:74] duration metric: took 179.656649ms to wait for pod list to return data ...
	I1101 01:06:23.157067   59148 default_sa.go:34] waiting for default service account to be created ...
	I1101 01:06:23.352990   59148 default_sa.go:45] found service account: "default"
	I1101 01:06:23.353024   59148 default_sa.go:55] duration metric: took 195.950242ms for default service account to be created ...
	I1101 01:06:23.353034   59148 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 01:06:23.557472   59148 system_pods.go:86] 8 kube-system pods found
	I1101 01:06:23.557498   59148 system_pods.go:89] "coredns-5dd5756b68-rgzt8" [6d136c6a-e0b2-44c3-a17b-85649d6ff7b7] Running
	I1101 01:06:23.557505   59148 system_pods.go:89] "etcd-default-k8s-diff-port-639310" [9cc2eba7-c72f-4a6f-9c55-8cce5586b574] Running
	I1101 01:06:23.557512   59148 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-639310" [e2b16d1e-af9f-452e-8243-5267f781ab19] Running
	I1101 01:06:23.557518   59148 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-639310" [9173e21f-a13f-4234-94a1-1976881ee23d] Running
	I1101 01:06:23.557524   59148 system_pods.go:89] "kube-proxy-kzgzn" [32d59980-f28a-482c-9aa8-8502915417f0] Running
	I1101 01:06:23.557531   59148 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-639310" [449df462-911a-4afa-8ca5-f9fccce9ecac] Running
	I1101 01:06:23.557541   59148 system_pods.go:89] "metrics-server-57f55c9bc5-65ph4" [4683706e-65f6-4845-a5ad-60da8cd20d8e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:23.557554   59148 system_pods.go:89] "storage-provisioner" [eaba9583-e564-4804-9cd3-2b4de36c85da] Running
	I1101 01:06:23.557561   59148 system_pods.go:126] duration metric: took 204.522772ms to wait for k8s-apps to be running ...
	I1101 01:06:23.557571   59148 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 01:06:23.557614   59148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:06:23.572950   59148 system_svc.go:56] duration metric: took 15.367105ms WaitForService to wait for kubelet.
	I1101 01:06:23.572979   59148 kubeadm.go:581] duration metric: took 14.954198383s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1101 01:06:23.572995   59148 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:06:23.754816   59148 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:06:23.754852   59148 node_conditions.go:123] node cpu capacity is 2
	I1101 01:06:23.754865   59148 node_conditions.go:105] duration metric: took 181.864765ms to run NodePressure ...
	I1101 01:06:23.754879   59148 start.go:228] waiting for startup goroutines ...
	I1101 01:06:23.754887   59148 start.go:233] waiting for cluster config update ...
	I1101 01:06:23.754902   59148 start.go:242] writing updated cluster config ...
	I1101 01:06:23.755221   59148 ssh_runner.go:195] Run: rm -f paused
	I1101 01:06:23.805298   59148 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1101 01:06:23.807226   59148 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-639310" cluster and "default" namespace by default
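The api_server.go lines above ("Checking apiserver healthz at https://192.168.72.97:8444/healthz ..." followed by "returned 200: ok") record a plain HTTPS GET against the apiserver's /healthz endpoint. The following is a minimal sketch of that kind of check, not minikube's actual client: the endpoint URL is taken from the run above, and TLS verification is skipped purely for brevity (the real checker authenticates against the cluster CA).

    // healthz_check.go - illustrative sketch of polling an apiserver /healthz endpoint.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func checkHealthz(url string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Assumption for the sketch: skip cert verification instead of loading the cluster CA.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
    	}
    	// The log above prints the body, which is simply "ok" when healthy.
    	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
    	return nil
    }

    func main() {
    	// Endpoint copied from the run above; any reachable apiserver healthz URL works the same way.
    	if err := checkHealthz("https://192.168.72.97:8444/healthz"); err != nil {
    		fmt.Println("healthz check failed:", err)
    	}
    }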
	I1101 01:06:24.353352   58676 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.430634921s)
	I1101 01:06:24.353418   58676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:06:24.367115   58676 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:06:24.376272   58676 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:06:24.385067   58676 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:06:24.385105   58676 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1101 01:06:24.436586   58676 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1101 01:06:24.436698   58676 kubeadm.go:322] [preflight] Running pre-flight checks
	I1101 01:06:24.592267   58676 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 01:06:24.592409   58676 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 01:06:24.592529   58676 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 01:06:24.834834   58676 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 01:06:24.836680   58676 out.go:204]   - Generating certificates and keys ...
	I1101 01:06:24.836825   58676 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1101 01:06:24.836918   58676 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1101 01:06:24.837052   58676 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 01:06:24.837150   58676 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1101 01:06:24.837378   58676 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1101 01:06:24.838501   58676 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1101 01:06:24.838970   58676 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1101 01:06:24.839488   58676 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1101 01:06:24.840058   58676 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 01:06:24.840454   58676 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 01:06:24.840925   58676 kubeadm.go:322] [certs] Using the existing "sa" key
	I1101 01:06:24.841017   58676 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 01:06:25.117460   58676 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 01:06:25.218894   58676 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 01:06:25.319416   58676 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 01:06:25.555023   58676 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 01:06:25.555490   58676 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 01:06:25.558041   58676 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 01:06:25.559946   58676 out.go:204]   - Booting up control plane ...
	I1101 01:06:25.560090   58676 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 01:06:25.560212   58676 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 01:06:25.560321   58676 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 01:06:25.577307   58676 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 01:06:25.580427   58676 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 01:06:25.580508   58676 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1101 01:06:25.710362   58676 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 01:06:23.963710   58823 pod_ready.go:102] pod "coredns-5644d7b6d9-v2xlz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:26.455851   58823 pod_ready.go:92] pod "coredns-5644d7b6d9-v2xlz" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:26.455880   58823 pod_ready.go:81] duration metric: took 9.548782268s waiting for pod "coredns-5644d7b6d9-v2xlz" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:26.455889   58823 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hkl2m" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:26.461243   58823 pod_ready.go:92] pod "kube-proxy-hkl2m" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:26.461277   58823 pod_ready.go:81] duration metric: took 5.380815ms waiting for pod "kube-proxy-hkl2m" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:26.461289   58823 pod_ready.go:38] duration metric: took 9.575303239s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:06:26.461314   58823 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:06:26.461372   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:06:26.476212   58823 api_server.go:72] duration metric: took 9.737981323s to wait for apiserver process to appear ...
	I1101 01:06:26.476245   58823 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:06:26.476268   58823 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I1101 01:06:26.483060   58823 api_server.go:279] https://192.168.39.90:8443/healthz returned 200:
	ok
	I1101 01:06:26.484299   58823 api_server.go:141] control plane version: v1.16.0
	I1101 01:06:26.484328   58823 api_server.go:131] duration metric: took 8.074303ms to wait for apiserver health ...
	I1101 01:06:26.484342   58823 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:06:26.488710   58823 system_pods.go:59] 4 kube-system pods found
	I1101 01:06:26.488745   58823 system_pods.go:61] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:26.488753   58823 system_pods.go:61] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:26.488766   58823 system_pods.go:61] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:26.488775   58823 system_pods.go:61] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:26.488787   58823 system_pods.go:74] duration metric: took 4.438458ms to wait for pod list to return data ...
	I1101 01:06:26.488797   58823 default_sa.go:34] waiting for default service account to be created ...
	I1101 01:06:26.492513   58823 default_sa.go:45] found service account: "default"
	I1101 01:06:26.492543   58823 default_sa.go:55] duration metric: took 3.739583ms for default service account to be created ...
	I1101 01:06:26.492553   58823 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 01:06:26.496897   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:26.496924   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:26.496929   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:26.496936   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:26.496942   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:26.496956   58823 retry.go:31] will retry after 215.348005ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:26.718021   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:26.718055   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:26.718064   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:26.718080   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:26.718086   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:26.718103   58823 retry.go:31] will retry after 357.067185ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:27.080480   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:27.080519   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:27.080528   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:27.080539   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:27.080548   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:27.080572   58823 retry.go:31] will retry after 441.083478ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:27.528922   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:27.528955   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:27.528964   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:27.528975   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:27.528984   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:27.529008   58823 retry.go:31] will retry after 595.152055ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:28.129735   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:28.129760   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:28.129765   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:28.129772   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:28.129778   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:28.129794   58823 retry.go:31] will retry after 591.454083ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:28.726058   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:28.726089   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:28.726097   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:28.726108   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:28.726118   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:28.726142   58823 retry.go:31] will retry after 682.338416ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:29.414282   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:29.414311   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:29.414321   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:29.414330   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:29.414338   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:29.414356   58823 retry.go:31] will retry after 953.248535ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:30.373950   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:30.373989   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:30.373998   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:30.374017   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:30.374028   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:30.374048   58823 retry.go:31] will retry after 1.291166145s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:31.671462   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:31.671516   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:31.671526   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:31.671537   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:31.671546   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:31.671565   58823 retry.go:31] will retry after 1.413833897s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:33.713596   58676 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002646 seconds
	I1101 01:06:33.713733   58676 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 01:06:33.731994   58676 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 01:06:34.275298   58676 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 01:06:34.275497   58676 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-008483 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 01:06:34.792259   58676 kubeadm.go:322] [bootstrap-token] Using token: ft1765.cra2ecqpjz8r5s0a
	I1101 01:06:34.793944   58676 out.go:204]   - Configuring RBAC rules ...
	I1101 01:06:34.794105   58676 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 01:06:34.800902   58676 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 01:06:34.811310   58676 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 01:06:34.821309   58676 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 01:06:34.826523   58676 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 01:06:34.832305   58676 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 01:06:34.852131   58676 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 01:06:35.137771   58676 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1101 01:06:35.206006   58676 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1101 01:06:35.207223   58676 kubeadm.go:322] 
	I1101 01:06:35.207316   58676 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1101 01:06:35.207327   58676 kubeadm.go:322] 
	I1101 01:06:35.207404   58676 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1101 01:06:35.207413   58676 kubeadm.go:322] 
	I1101 01:06:35.207448   58676 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1101 01:06:35.207528   58676 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 01:06:35.207619   58676 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 01:06:35.207640   58676 kubeadm.go:322] 
	I1101 01:06:35.207703   58676 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1101 01:06:35.207722   58676 kubeadm.go:322] 
	I1101 01:06:35.207796   58676 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 01:06:35.207805   58676 kubeadm.go:322] 
	I1101 01:06:35.207878   58676 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1101 01:06:35.208001   58676 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 01:06:35.208102   58676 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 01:06:35.208111   58676 kubeadm.go:322] 
	I1101 01:06:35.208207   58676 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 01:06:35.208314   58676 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1101 01:06:35.208337   58676 kubeadm.go:322] 
	I1101 01:06:35.208459   58676 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ft1765.cra2ecqpjz8r5s0a \
	I1101 01:06:35.208636   58676 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 \
	I1101 01:06:35.208674   58676 kubeadm.go:322] 	--control-plane 
	I1101 01:06:35.208687   58676 kubeadm.go:322] 
	I1101 01:06:35.208812   58676 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1101 01:06:35.208823   58676 kubeadm.go:322] 
	I1101 01:06:35.208936   58676 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ft1765.cra2ecqpjz8r5s0a \
	I1101 01:06:35.209057   58676 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 
	I1101 01:06:35.209758   58676 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 01:06:35.209780   58676 cni.go:84] Creating CNI manager for ""
	I1101 01:06:35.209790   58676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:06:35.211735   58676 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:06:35.213123   58676 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:06:35.235025   58676 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
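The step above copies a 457-byte bridge CNI conflist to /etc/cni/net.d/1-k8s.conflist; the file's contents are not shown in the log. As a hedged sketch of the kind of bridge-plus-portmap conflist the CNI bridge plugin accepts, the Go program below writes a plausible minimal configuration. The subnet, bridge name, and output path here are illustrative assumptions, not values taken from this run.

    // write_cni_conflist.go - sketch of writing a minimal bridge CNI conflist.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }
    `

    func main() {
    	// Assumed local output path; on the node the log shows /etc/cni/net.d/1-k8s.conflist.
    	if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		panic(err)
    	}
    }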
	I1101 01:06:35.271015   58676 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 01:06:35.271092   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9 minikube.k8s.io/name=no-preload-008483 minikube.k8s.io/updated_at=2023_11_01T01_06_35_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:35.271099   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:35.305061   58676 ops.go:34] apiserver oom_adj: -16
	I1101 01:06:35.663339   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:35.805680   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:33.090990   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:33.091030   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:33.091038   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:33.091049   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:33.091060   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:33.091078   58823 retry.go:31] will retry after 2.252641435s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:35.350673   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:35.350703   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:35.350711   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:35.350722   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:35.350735   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:35.350753   58823 retry.go:31] will retry after 2.131984659s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:36.402770   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:36.902353   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:37.402763   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:37.902598   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:38.401883   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:38.902775   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:39.402062   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:39.902544   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:40.402350   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:40.901853   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:37.489100   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:37.489127   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:37.489132   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:37.489141   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:37.489151   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:37.489169   58823 retry.go:31] will retry after 3.273821759s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:40.767389   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:40.767409   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:40.767414   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:40.767421   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:40.767427   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:40.767441   58823 retry.go:31] will retry after 4.351278698s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:41.402632   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:41.901859   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:42.402379   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:42.902816   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:43.402503   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:43.902158   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:44.402562   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:44.901867   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:45.401852   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:45.902865   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:45.124108   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:45.124138   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:45.124147   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:45.124158   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:45.124166   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:45.124184   58823 retry.go:31] will retry after 4.53047058s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:46.402463   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:46.902480   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:47.402022   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:47.568628   58676 kubeadm.go:1081] duration metric: took 12.297606595s to wait for elevateKubeSystemPrivileges.
	I1101 01:06:47.568672   58676 kubeadm.go:406] StartCluster complete in 5m8.570526689s
	I1101 01:06:47.568696   58676 settings.go:142] acquiring lock: {Name:mk7f269e64dfd8d176737f993e01f6e6badafbc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:06:47.568787   58676 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 01:06:47.570839   58676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/kubeconfig: {Name:mk08da65b6c71084e1cfafb19800038e8c8303e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:06:47.571093   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 01:06:47.571207   58676 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1101 01:06:47.571281   58676 addons.go:69] Setting storage-provisioner=true in profile "no-preload-008483"
	I1101 01:06:47.571307   58676 addons.go:69] Setting metrics-server=true in profile "no-preload-008483"
	I1101 01:06:47.571329   58676 addons.go:231] Setting addon metrics-server=true in "no-preload-008483"
	I1101 01:06:47.571345   58676 config.go:182] Loaded profile config "no-preload-008483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:06:47.571360   58676 addons.go:69] Setting default-storageclass=true in profile "no-preload-008483"
	I1101 01:06:47.571369   58676 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-008483"
	W1101 01:06:47.571348   58676 addons.go:240] addon metrics-server should already be in state true
	I1101 01:06:47.571441   58676 host.go:66] Checking if "no-preload-008483" exists ...
	I1101 01:06:47.571312   58676 addons.go:231] Setting addon storage-provisioner=true in "no-preload-008483"
	W1101 01:06:47.571490   58676 addons.go:240] addon storage-provisioner should already be in state true
	I1101 01:06:47.571527   58676 host.go:66] Checking if "no-preload-008483" exists ...
	I1101 01:06:47.571816   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.571815   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.571873   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.571892   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.571873   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.572006   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.590259   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39063
	I1101 01:06:47.590724   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.591055   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39635
	I1101 01:06:47.591202   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.591220   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.591229   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46549
	I1101 01:06:47.591621   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.591707   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.591743   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.592428   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.592471   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.592794   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.592808   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.592822   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.592826   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.593236   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.593283   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.593437   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetState
	I1101 01:06:47.593927   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.593966   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.598345   58676 addons.go:231] Setting addon default-storageclass=true in "no-preload-008483"
	W1101 01:06:47.598381   58676 addons.go:240] addon default-storageclass should already be in state true
	I1101 01:06:47.598413   58676 host.go:66] Checking if "no-preload-008483" exists ...
	I1101 01:06:47.598819   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.598871   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.613965   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43751
	I1101 01:06:47.614004   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40855
	I1101 01:06:47.614542   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.614669   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.615105   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.615121   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.615151   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.615189   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.615476   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.615537   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.615690   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetState
	I1101 01:06:47.615767   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetState
	I1101 01:06:47.617847   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:06:47.620144   58676 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:06:47.618264   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45253
	I1101 01:06:47.618444   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:06:47.621319   58676 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-008483" context rescaled to 1 replicas
	I1101 01:06:47.621520   58676 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.140 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 01:06:47.623048   58676 out.go:177] * Verifying Kubernetes components...
	I1101 01:06:47.621641   58676 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:06:47.621894   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.625008   58676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 01:06:47.625024   58676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:06:47.626461   58676 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1101 01:06:47.628411   58676 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 01:06:47.628425   58676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 01:06:47.628439   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:06:47.626617   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:06:47.627063   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.628510   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.628907   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.629438   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.629480   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.631968   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:06:47.632175   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:06:47.632212   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:06:47.632315   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:06:47.632508   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:06:47.632679   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:06:47.632739   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:06:47.632795   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:06:47.633383   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:06:47.633403   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:06:47.633427   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:06:47.633584   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:06:47.633708   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:06:47.633891   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:06:47.650937   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41791
	I1101 01:06:47.651372   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.651921   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.651956   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.652322   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.652536   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetState
	I1101 01:06:47.654393   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:06:47.654706   58676 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 01:06:47.654721   58676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 01:06:47.654743   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:06:47.657743   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:06:47.658176   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:06:47.658204   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:06:47.658448   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:06:47.658673   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:06:47.658836   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:06:47.659008   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:06:47.808648   58676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:06:47.837158   58676 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 01:06:47.837181   58676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1101 01:06:47.846004   58676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 01:06:47.882427   58676 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 01:06:47.882454   58676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 01:06:47.899419   58676 node_ready.go:35] waiting up to 6m0s for node "no-preload-008483" to be "Ready" ...
	I1101 01:06:47.899496   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 01:06:47.919788   58676 node_ready.go:49] node "no-preload-008483" has status "Ready":"True"
	I1101 01:06:47.919821   58676 node_ready.go:38] duration metric: took 20.370648ms waiting for node "no-preload-008483" to be "Ready" ...
	I1101 01:06:47.919836   58676 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:06:47.926205   58676 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:06:47.926232   58676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 01:06:47.930715   58676 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5tp9h" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:47.982413   58676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:06:49.813480   58676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.004790768s)
	I1101 01:06:49.813519   58676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.967476056s)
	I1101 01:06:49.813564   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:49.813588   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:49.813528   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:49.813617   58676 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.914052615s)
	I1101 01:06:49.813634   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:49.813643   58676 start.go:926] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1101 01:06:49.813924   58676 main.go:141] libmachine: (no-preload-008483) DBG | Closing plugin on server side
	I1101 01:06:49.813935   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:49.813956   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:49.813970   58676 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:49.813979   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:49.813980   58676 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:49.813990   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:49.813991   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:49.814014   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:49.814239   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:49.814258   58676 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:49.814321   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:49.814339   58676 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:49.857721   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:49.857749   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:49.858034   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:49.858053   58676 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:50.026844   58676 pod_ready.go:97] error getting pod "coredns-5dd5756b68-5tp9h" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-5tp9h" not found
	I1101 01:06:50.026876   58676 pod_ready.go:81] duration metric: took 2.096134316s waiting for pod "coredns-5dd5756b68-5tp9h" in "kube-system" namespace to be "Ready" ...
	E1101 01:06:50.026888   58676 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-5tp9h" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-5tp9h" not found
	I1101 01:06:50.026898   58676 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-m8v7v" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:50.204452   58676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.22199218s)
	I1101 01:06:50.204543   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:50.204561   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:50.204896   58676 main.go:141] libmachine: (no-preload-008483) DBG | Closing plugin on server side
	I1101 01:06:50.204985   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:50.205017   58676 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:50.205046   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:50.205064   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:50.205339   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:50.205360   58676 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:50.205371   58676 addons.go:467] Verifying addon metrics-server=true in "no-preload-008483"
	I1101 01:06:50.205393   58676 main.go:141] libmachine: (no-preload-008483) DBG | Closing plugin on server side
	I1101 01:06:50.207552   58676 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1101 01:06:50.208879   58676 addons.go:502] enable addons completed in 2.637673191s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1101 01:06:49.663546   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:49.663578   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:49.663585   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:49.663595   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:49.663604   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:49.663623   58823 retry.go:31] will retry after 5.557220121s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
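The retry.go lines above repeatedly list the kube-system pods and, while etcd, kube-apiserver, kube-controller-manager and kube-scheduler are still missing, wait a growing delay before trying again ("will retry after ...: missing components: ..."). The sketch below shows that retry-with-growing-backoff shape in isolation; listRunning is a placeholder stand-in for the real pod query, and the backoff growth factor is an assumption, not minikube's retry implementation.

    // retry_backoff.go - illustrative retry loop with a growing backoff.
    package main

    import (
    	"fmt"
    	"time"
    )

    // listRunning stands in for querying kube-system and reporting which
    // control-plane components are running; here it always reports none.
    func listRunning() map[string]bool {
    	return map[string]bool{}
    }

    func waitForComponents(components []string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	backoff := 200 * time.Millisecond
    	for {
    		running := listRunning()
    		var missing []string
    		for _, c := range components {
    			if !running[c] {
    				missing = append(missing, c)
    			}
    		}
    		if len(missing) == 0 {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out; still missing: %v", missing)
    		}
    		fmt.Printf("will retry after %v: missing components: %v\n", backoff, missing)
    		time.Sleep(backoff)
    		if backoff < 5*time.Second {
    			backoff += backoff / 2 // grow the delay, roughly like the intervals logged above
    		}
    	}
    }

    func main() {
    	_ = waitForComponents(
    		[]string{"etcd", "kube-apiserver", "kube-controller-manager", "kube-scheduler"},
    		30*time.Second,
    	)
    }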
	I1101 01:06:52.106184   58676 pod_ready.go:92] pod "coredns-5dd5756b68-m8v7v" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:52.106208   58676 pod_ready.go:81] duration metric: took 2.079304042s waiting for pod "coredns-5dd5756b68-m8v7v" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.106218   58676 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.112508   58676 pod_ready.go:92] pod "etcd-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:52.112531   58676 pod_ready.go:81] duration metric: took 6.307404ms waiting for pod "etcd-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.112540   58676 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.119263   58676 pod_ready.go:92] pod "kube-apiserver-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:52.119296   58676 pod_ready.go:81] duration metric: took 6.748553ms waiting for pod "kube-apiserver-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.119311   58676 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.125594   58676 pod_ready.go:92] pod "kube-controller-manager-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:52.125619   58676 pod_ready.go:81] duration metric: took 6.30051ms waiting for pod "kube-controller-manager-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.125629   58676 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4cx5t" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.503777   58676 pod_ready.go:92] pod "kube-proxy-4cx5t" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:52.503802   58676 pod_ready.go:81] duration metric: took 378.166648ms waiting for pod "kube-proxy-4cx5t" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.503811   58676 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.904254   58676 pod_ready.go:92] pod "kube-scheduler-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:52.904275   58676 pod_ready.go:81] duration metric: took 400.457426ms waiting for pod "kube-scheduler-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.904284   58676 pod_ready.go:38] duration metric: took 4.984437509s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:06:52.904303   58676 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:06:52.904352   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:06:52.917549   58676 api_server.go:72] duration metric: took 5.295984843s to wait for apiserver process to appear ...
	I1101 01:06:52.917576   58676 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:06:52.917595   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:06:52.926515   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 200:
	ok
	I1101 01:06:52.927673   58676 api_server.go:141] control plane version: v1.28.3
	I1101 01:06:52.927692   58676 api_server.go:131] duration metric: took 10.109726ms to wait for apiserver health ...
	I1101 01:06:52.927700   58676 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:06:53.109620   58676 system_pods.go:59] 8 kube-system pods found
	I1101 01:06:53.109648   58676 system_pods.go:61] "coredns-5dd5756b68-m8v7v" [351a9458-075b-40d1-96d1-86a450a99251] Running
	I1101 01:06:53.109653   58676 system_pods.go:61] "etcd-no-preload-008483" [e1db4a59-f5e6-4114-a942-1faf4ff84af2] Running
	I1101 01:06:53.109657   58676 system_pods.go:61] "kube-apiserver-no-preload-008483" [f8f8bb39-3093-44bb-8255-5a7d78437a75] Running
	I1101 01:06:53.109661   58676 system_pods.go:61] "kube-controller-manager-no-preload-008483" [a45df9e4-3399-4c21-981f-3c3caaed52a8] Running
	I1101 01:06:53.109665   58676 system_pods.go:61] "kube-proxy-4cx5t" [57c1e87a-aa14-440d-9001-a6ba2ab7c8c6] Running
	I1101 01:06:53.109670   58676 system_pods.go:61] "kube-scheduler-no-preload-008483" [329b7a2d-6146-4e08-910e-ed4d40f57dcb] Running
	I1101 01:06:53.109676   58676 system_pods.go:61] "metrics-server-57f55c9bc5-qcxt7" [bf444b92-dd54-43fc-a9a8-0e9000b562e3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:53.109684   58676 system_pods.go:61] "storage-provisioner" [909163da-9021-4cee-9a72-1bc9b6ae9390] Running
	I1101 01:06:53.109693   58676 system_pods.go:74] duration metric: took 181.986766ms to wait for pod list to return data ...
	I1101 01:06:53.109706   58676 default_sa.go:34] waiting for default service account to be created ...
	I1101 01:06:53.305872   58676 default_sa.go:45] found service account: "default"
	I1101 01:06:53.305904   58676 default_sa.go:55] duration metric: took 196.187269ms for default service account to be created ...
	I1101 01:06:53.305919   58676 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 01:06:53.506566   58676 system_pods.go:86] 8 kube-system pods found
	I1101 01:06:53.506601   58676 system_pods.go:89] "coredns-5dd5756b68-m8v7v" [351a9458-075b-40d1-96d1-86a450a99251] Running
	I1101 01:06:53.506610   58676 system_pods.go:89] "etcd-no-preload-008483" [e1db4a59-f5e6-4114-a942-1faf4ff84af2] Running
	I1101 01:06:53.506618   58676 system_pods.go:89] "kube-apiserver-no-preload-008483" [f8f8bb39-3093-44bb-8255-5a7d78437a75] Running
	I1101 01:06:53.506625   58676 system_pods.go:89] "kube-controller-manager-no-preload-008483" [a45df9e4-3399-4c21-981f-3c3caaed52a8] Running
	I1101 01:06:53.506631   58676 system_pods.go:89] "kube-proxy-4cx5t" [57c1e87a-aa14-440d-9001-a6ba2ab7c8c6] Running
	I1101 01:06:53.506640   58676 system_pods.go:89] "kube-scheduler-no-preload-008483" [329b7a2d-6146-4e08-910e-ed4d40f57dcb] Running
	I1101 01:06:53.506651   58676 system_pods.go:89] "metrics-server-57f55c9bc5-qcxt7" [bf444b92-dd54-43fc-a9a8-0e9000b562e3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:53.506664   58676 system_pods.go:89] "storage-provisioner" [909163da-9021-4cee-9a72-1bc9b6ae9390] Running
	I1101 01:06:53.506675   58676 system_pods.go:126] duration metric: took 200.749464ms to wait for k8s-apps to be running ...
	I1101 01:06:53.506692   58676 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 01:06:53.506747   58676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:06:53.519471   58676 system_svc.go:56] duration metric: took 12.766173ms WaitForService to wait for kubelet.
	I1101 01:06:53.519502   58676 kubeadm.go:581] duration metric: took 5.897944072s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1101 01:06:53.519525   58676 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:06:53.705460   58676 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:06:53.705490   58676 node_conditions.go:123] node cpu capacity is 2
	I1101 01:06:53.705501   58676 node_conditions.go:105] duration metric: took 185.970851ms to run NodePressure ...
	I1101 01:06:53.705515   58676 start.go:228] waiting for startup goroutines ...
	I1101 01:06:53.705523   58676 start.go:233] waiting for cluster config update ...
	I1101 01:06:53.705537   58676 start.go:242] writing updated cluster config ...
	I1101 01:06:53.705824   58676 ssh_runner.go:195] Run: rm -f paused
	I1101 01:06:53.758508   58676 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1101 01:06:53.761998   58676 out.go:177] * Done! kubectl is now configured to use "no-preload-008483" cluster and "default" namespace by default
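
The readiness phase logged above for "no-preload-008483" boils down to three probes: an apiserver process, a 200 from the healthz endpoint, and a running kubelet unit. A minimal sketch of repeating them by hand, assuming SSH access to the VM and the endpoint shown in the log (192.168.50.140:8443):

	# on the node, e.g. via: minikube -p no-preload-008483 ssh
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'      # apiserver process present? (same pattern as the log)
	curl -k https://192.168.50.140:8443/healthz       # expect HTTP 200 with body "ok"
	sudo systemctl is-active --quiet kubelet && echo kubelet running
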
	I1101 01:06:55.226416   58823 system_pods.go:86] 5 kube-system pods found
	I1101 01:06:55.226443   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:55.226449   58823 system_pods.go:89] "kube-apiserver-old-k8s-version-330042" [1d813832-7c56-439f-aee9-c5c326e6cd3d] Pending
	I1101 01:06:55.226453   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:55.226460   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:55.226466   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:55.226480   58823 retry.go:31] will retry after 6.901184226s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:07:02.133379   58823 system_pods.go:86] 5 kube-system pods found
	I1101 01:07:02.133412   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:07:02.133421   58823 system_pods.go:89] "kube-apiserver-old-k8s-version-330042" [1d813832-7c56-439f-aee9-c5c326e6cd3d] Running
	I1101 01:07:02.133427   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:07:02.133442   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:07:02.133451   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:07:02.133471   58823 retry.go:31] will retry after 10.272464072s: missing components: etcd, kube-controller-manager, kube-scheduler
	I1101 01:07:12.412133   58823 system_pods.go:86] 5 kube-system pods found
	I1101 01:07:12.412166   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:07:12.412175   58823 system_pods.go:89] "kube-apiserver-old-k8s-version-330042" [1d813832-7c56-439f-aee9-c5c326e6cd3d] Running
	I1101 01:07:12.412181   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:07:12.412193   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:07:12.412202   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:07:12.412221   58823 retry.go:31] will retry after 11.290918588s: missing components: etcd, kube-controller-manager, kube-scheduler
	I1101 01:07:23.709462   58823 system_pods.go:86] 8 kube-system pods found
	I1101 01:07:23.709495   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:07:23.709503   58823 system_pods.go:89] "etcd-old-k8s-version-330042" [fc62fe53-9611-4b3d-9dca-a30d58618b2b] Running
	I1101 01:07:23.709510   58823 system_pods.go:89] "kube-apiserver-old-k8s-version-330042" [1d813832-7c56-439f-aee9-c5c326e6cd3d] Running
	I1101 01:07:23.709517   58823 system_pods.go:89] "kube-controller-manager-old-k8s-version-330042" [8ad0ccf9-fa8e-4205-b89c-f5f57cb7be6e] Running
	I1101 01:07:23.709524   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:07:23.709528   58823 system_pods.go:89] "kube-scheduler-old-k8s-version-330042" [2b077f6b-8077-4ccb-93c2-c6d3383b1113] Pending
	I1101 01:07:23.709534   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:07:23.709543   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:07:23.709559   58823 retry.go:31] will retry after 12.900513481s: missing components: kube-scheduler
	I1101 01:07:36.615720   58823 system_pods.go:86] 8 kube-system pods found
	I1101 01:07:36.615746   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:07:36.615751   58823 system_pods.go:89] "etcd-old-k8s-version-330042" [fc62fe53-9611-4b3d-9dca-a30d58618b2b] Running
	I1101 01:07:36.615756   58823 system_pods.go:89] "kube-apiserver-old-k8s-version-330042" [1d813832-7c56-439f-aee9-c5c326e6cd3d] Running
	I1101 01:07:36.615760   58823 system_pods.go:89] "kube-controller-manager-old-k8s-version-330042" [8ad0ccf9-fa8e-4205-b89c-f5f57cb7be6e] Running
	I1101 01:07:36.615763   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:07:36.615767   58823 system_pods.go:89] "kube-scheduler-old-k8s-version-330042" [2b077f6b-8077-4ccb-93c2-c6d3383b1113] Running
	I1101 01:07:36.615774   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:07:36.615780   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:07:36.615787   58823 system_pods.go:126] duration metric: took 1m10.123228938s to wait for k8s-apps to be running ...
	I1101 01:07:36.615793   58823 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 01:07:36.615837   58823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:07:36.634354   58823 system_svc.go:56] duration metric: took 18.547208ms WaitForService to wait for kubelet.
	I1101 01:07:36.634387   58823 kubeadm.go:581] duration metric: took 1m19.896166299s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1101 01:07:36.634412   58823 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:07:36.638286   58823 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:07:36.638315   58823 node_conditions.go:123] node cpu capacity is 2
	I1101 01:07:36.638329   58823 node_conditions.go:105] duration metric: took 3.911826ms to run NodePressure ...
	I1101 01:07:36.638344   58823 start.go:228] waiting for startup goroutines ...
	I1101 01:07:36.638351   58823 start.go:233] waiting for cluster config update ...
	I1101 01:07:36.638365   58823 start.go:242] writing updated cluster config ...
	I1101 01:07:36.638658   58823 ssh_runner.go:195] Run: rm -f paused
	I1101 01:07:36.688409   58823 start.go:600] kubectl: 1.28.3, cluster: 1.16.0 (minor skew: 12)
	I1101 01:07:36.690520   58823 out.go:177] 
	W1101 01:07:36.692006   58823 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.16.0.
	I1101 01:07:36.693512   58823 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1101 01:07:36.694940   58823 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-330042" cluster and "default" namespace by default
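
The skew warning above (host kubectl 1.28.3 against a 1.16.0 cluster) is the one case in this run where minikube points at its own bundled kubectl. A sketch of both ways of talking to this profile, with names taken from the log:

	# use a kubectl matching the cluster version, as the log suggests
	minikube -p old-k8s-version-330042 kubectl -- get pods -A
	# or keep the host kubectl and pin the context created for this profile
	kubectl --context old-k8s-version-330042 get pods -A
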
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-11-01 01:00:04 UTC, ends at Wed 2023-11-01 01:14:43 UTC. --
	Nov 01 01:14:42 embed-certs-754132 crio[723]: time="2023-11-01 01:14:42.879183857Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698801282879170233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=9a35e46b-f066-4acb-aefd-9a6f6086be6f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:14:42 embed-certs-754132 crio[723]: time="2023-11-01 01:14:42.879944520Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e79fd14a-e8f0-4531-8ad7-f911b35ff5be name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:14:42 embed-certs-754132 crio[723]: time="2023-11-01 01:14:42.880023184Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e79fd14a-e8f0-4531-8ad7-f911b35ff5be name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:14:42 embed-certs-754132 crio[723]: time="2023-11-01 01:14:42.880181791Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d28340698815c870c266c1c350e03df688140bcf1e135a7004963522db855047,PodSandboxId:d409231bf8be0dd660b5e18385ae9020caa81c6fe3d741e10350cc41ebd2e242,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698800740349558304,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7feb8931-83d0-4968-a295-a4202e8fc8c3,},Annotations:map[string]string{io.kubernetes.container.hash: 27446c8b,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1c58dca73e3cce0160cffb2a2ca266c63aaf632986703e915acd2f8e56f7b77,PodSandboxId:84cf7b9fd7aa639d07509a9df07d14db06e3f176a750a4e49a27ab5fea5978de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698800740234980877,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cwbfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7f5ba1e-bd63-456b-94cc-0e2c121b7792,},Annotations:map[string]string{io.kubernetes.container.hash: aaa212e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a2f7f37e23492d7c891917cbc79897ac18943c36594fedf027550d2f6b006ed,PodSandboxId:e245796915c72f2ae4030a1d8a8cd6db1edb8e02c0481b1f2e1d6d7dc22659f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698800739840337972,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6kqbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e03e6370-35d1-4438-8b18-d62b0a253ea6,},Annotations:map[string]string{io.kubernetes.container.hash: 515528ed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61170a3d73c795f3fe15b7ec6a56f67d0bbde0572c053b74e74ee78d2e13ce96,PodSandboxId:b78c4d5b084d1480831966342f00b5efe25a5e80e2a41cfdeb05a02c460eed3b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698800716261746108,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-754132,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 0a6ee9577f47faf2fcc83cf18cc76050,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:272478e18337c09b38f96f1b3110d25b51954a59ff295ca6699f743b27b0e20d,PodSandboxId:b5d6ed323107ae773936ce033ed465f9ed6cbbaaa2686cf4dc10348c782c761c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698800715830550356,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-754132,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d8dc3bb5d9b817ec64d94b3b634f0ac,},Annotations:
map[string]string{io.kubernetes.container.hash: 79674182,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cae13d3fdeec1e5275a2e5c1d1a9fc6af5ba238cf5ec981846cec6711a32c7ea,PodSandboxId:19bfc042d89ec7a403f3d85c6495c44be359ae22feff45d85c7efa92ff8af12d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698800715683397207,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-754132,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de7577929e1604837b75
088bed2286c,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f00b1e4feeac20c5deb9f138667fe2d11217e3e417d8c54a269693561f3529f6,PodSandboxId:bdb69288d9d92d54c9e97730b1d05f62324d0e95fe9792109e5a3d41d8a46e22,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698800715655681571,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-754132,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc86b9788e9fe6115f54b92ff1ed7d8
7,},Annotations:map[string]string{io.kubernetes.container.hash: d73d1b25,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e79fd14a-e8f0-4531-8ad7-f911b35ff5be name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:14:42 embed-certs-754132 crio[723]: time="2023-11-01 01:14:42.920321400Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=9c74d7d8-4604-4dc1-8c65-e44fe3af5dd4 name=/runtime.v1.RuntimeService/Version
	Nov 01 01:14:42 embed-certs-754132 crio[723]: time="2023-11-01 01:14:42.920390586Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9c74d7d8-4604-4dc1-8c65-e44fe3af5dd4 name=/runtime.v1.RuntimeService/Version
	Nov 01 01:14:42 embed-certs-754132 crio[723]: time="2023-11-01 01:14:42.921947517Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=371a1b16-0151-4204-b5cd-9a8049215736 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:14:42 embed-certs-754132 crio[723]: time="2023-11-01 01:14:42.922453471Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698801282922436903,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=371a1b16-0151-4204-b5cd-9a8049215736 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:14:42 embed-certs-754132 crio[723]: time="2023-11-01 01:14:42.922964282Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=25fe055a-d8e3-49cb-a2a0-4a104744cdc7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:14:42 embed-certs-754132 crio[723]: time="2023-11-01 01:14:42.923112831Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=25fe055a-d8e3-49cb-a2a0-4a104744cdc7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:14:42 embed-certs-754132 crio[723]: time="2023-11-01 01:14:42.923363489Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d28340698815c870c266c1c350e03df688140bcf1e135a7004963522db855047,PodSandboxId:d409231bf8be0dd660b5e18385ae9020caa81c6fe3d741e10350cc41ebd2e242,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698800740349558304,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7feb8931-83d0-4968-a295-a4202e8fc8c3,},Annotations:map[string]string{io.kubernetes.container.hash: 27446c8b,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1c58dca73e3cce0160cffb2a2ca266c63aaf632986703e915acd2f8e56f7b77,PodSandboxId:84cf7b9fd7aa639d07509a9df07d14db06e3f176a750a4e49a27ab5fea5978de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698800740234980877,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cwbfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7f5ba1e-bd63-456b-94cc-0e2c121b7792,},Annotations:map[string]string{io.kubernetes.container.hash: aaa212e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a2f7f37e23492d7c891917cbc79897ac18943c36594fedf027550d2f6b006ed,PodSandboxId:e245796915c72f2ae4030a1d8a8cd6db1edb8e02c0481b1f2e1d6d7dc22659f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698800739840337972,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6kqbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e03e6370-35d1-4438-8b18-d62b0a253ea6,},Annotations:map[string]string{io.kubernetes.container.hash: 515528ed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61170a3d73c795f3fe15b7ec6a56f67d0bbde0572c053b74e74ee78d2e13ce96,PodSandboxId:b78c4d5b084d1480831966342f00b5efe25a5e80e2a41cfdeb05a02c460eed3b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698800716261746108,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-754132,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 0a6ee9577f47faf2fcc83cf18cc76050,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:272478e18337c09b38f96f1b3110d25b51954a59ff295ca6699f743b27b0e20d,PodSandboxId:b5d6ed323107ae773936ce033ed465f9ed6cbbaaa2686cf4dc10348c782c761c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698800715830550356,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-754132,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d8dc3bb5d9b817ec64d94b3b634f0ac,},Annotations:
map[string]string{io.kubernetes.container.hash: 79674182,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cae13d3fdeec1e5275a2e5c1d1a9fc6af5ba238cf5ec981846cec6711a32c7ea,PodSandboxId:19bfc042d89ec7a403f3d85c6495c44be359ae22feff45d85c7efa92ff8af12d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698800715683397207,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-754132,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de7577929e1604837b75
088bed2286c,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f00b1e4feeac20c5deb9f138667fe2d11217e3e417d8c54a269693561f3529f6,PodSandboxId:bdb69288d9d92d54c9e97730b1d05f62324d0e95fe9792109e5a3d41d8a46e22,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698800715655681571,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-754132,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc86b9788e9fe6115f54b92ff1ed7d8
7,},Annotations:map[string]string{io.kubernetes.container.hash: d73d1b25,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=25fe055a-d8e3-49cb-a2a0-4a104744cdc7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:14:42 embed-certs-754132 crio[723]: time="2023-11-01 01:14:42.967978123Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=7efb04a0-a16e-47f4-af64-8e46960f91f2 name=/runtime.v1.RuntimeService/Version
	Nov 01 01:14:42 embed-certs-754132 crio[723]: time="2023-11-01 01:14:42.968036333Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=7efb04a0-a16e-47f4-af64-8e46960f91f2 name=/runtime.v1.RuntimeService/Version
	Nov 01 01:14:42 embed-certs-754132 crio[723]: time="2023-11-01 01:14:42.969301077Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b9b735b5-ae94-4e6a-a2e2-ba5e14c46478 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:14:42 embed-certs-754132 crio[723]: time="2023-11-01 01:14:42.969832998Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698801282969812822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=b9b735b5-ae94-4e6a-a2e2-ba5e14c46478 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:14:42 embed-certs-754132 crio[723]: time="2023-11-01 01:14:42.971017173Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2a55a78f-ba0f-428a-9f3d-055d28c09686 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:14:42 embed-certs-754132 crio[723]: time="2023-11-01 01:14:42.971096806Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2a55a78f-ba0f-428a-9f3d-055d28c09686 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:14:42 embed-certs-754132 crio[723]: time="2023-11-01 01:14:42.971346272Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d28340698815c870c266c1c350e03df688140bcf1e135a7004963522db855047,PodSandboxId:d409231bf8be0dd660b5e18385ae9020caa81c6fe3d741e10350cc41ebd2e242,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698800740349558304,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7feb8931-83d0-4968-a295-a4202e8fc8c3,},Annotations:map[string]string{io.kubernetes.container.hash: 27446c8b,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1c58dca73e3cce0160cffb2a2ca266c63aaf632986703e915acd2f8e56f7b77,PodSandboxId:84cf7b9fd7aa639d07509a9df07d14db06e3f176a750a4e49a27ab5fea5978de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698800740234980877,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cwbfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7f5ba1e-bd63-456b-94cc-0e2c121b7792,},Annotations:map[string]string{io.kubernetes.container.hash: aaa212e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a2f7f37e23492d7c891917cbc79897ac18943c36594fedf027550d2f6b006ed,PodSandboxId:e245796915c72f2ae4030a1d8a8cd6db1edb8e02c0481b1f2e1d6d7dc22659f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698800739840337972,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6kqbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e03e6370-35d1-4438-8b18-d62b0a253ea6,},Annotations:map[string]string{io.kubernetes.container.hash: 515528ed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61170a3d73c795f3fe15b7ec6a56f67d0bbde0572c053b74e74ee78d2e13ce96,PodSandboxId:b78c4d5b084d1480831966342f00b5efe25a5e80e2a41cfdeb05a02c460eed3b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698800716261746108,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-754132,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 0a6ee9577f47faf2fcc83cf18cc76050,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:272478e18337c09b38f96f1b3110d25b51954a59ff295ca6699f743b27b0e20d,PodSandboxId:b5d6ed323107ae773936ce033ed465f9ed6cbbaaa2686cf4dc10348c782c761c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698800715830550356,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-754132,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d8dc3bb5d9b817ec64d94b3b634f0ac,},Annotations:
map[string]string{io.kubernetes.container.hash: 79674182,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cae13d3fdeec1e5275a2e5c1d1a9fc6af5ba238cf5ec981846cec6711a32c7ea,PodSandboxId:19bfc042d89ec7a403f3d85c6495c44be359ae22feff45d85c7efa92ff8af12d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698800715683397207,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-754132,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de7577929e1604837b75
088bed2286c,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f00b1e4feeac20c5deb9f138667fe2d11217e3e417d8c54a269693561f3529f6,PodSandboxId:bdb69288d9d92d54c9e97730b1d05f62324d0e95fe9792109e5a3d41d8a46e22,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698800715655681571,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-754132,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc86b9788e9fe6115f54b92ff1ed7d8
7,},Annotations:map[string]string{io.kubernetes.container.hash: d73d1b25,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2a55a78f-ba0f-428a-9f3d-055d28c09686 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:14:43 embed-certs-754132 crio[723]: time="2023-11-01 01:14:43.011321179Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=09c56e47-0f8e-4c30-bb39-0803ae429674 name=/runtime.v1.RuntimeService/Version
	Nov 01 01:14:43 embed-certs-754132 crio[723]: time="2023-11-01 01:14:43.011422299Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=09c56e47-0f8e-4c30-bb39-0803ae429674 name=/runtime.v1.RuntimeService/Version
	Nov 01 01:14:43 embed-certs-754132 crio[723]: time="2023-11-01 01:14:43.012727738Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b030055b-054e-448e-96aa-0d890702ea0f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:14:43 embed-certs-754132 crio[723]: time="2023-11-01 01:14:43.013872468Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698801283013850959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=b030055b-054e-448e-96aa-0d890702ea0f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:14:43 embed-certs-754132 crio[723]: time="2023-11-01 01:14:43.014689673Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3f060035-6921-4f41-a014-e9f1da89720a name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:14:43 embed-certs-754132 crio[723]: time="2023-11-01 01:14:43.014768662Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3f060035-6921-4f41-a014-e9f1da89720a name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:14:43 embed-certs-754132 crio[723]: time="2023-11-01 01:14:43.014930134Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d28340698815c870c266c1c350e03df688140bcf1e135a7004963522db855047,PodSandboxId:d409231bf8be0dd660b5e18385ae9020caa81c6fe3d741e10350cc41ebd2e242,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698800740349558304,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7feb8931-83d0-4968-a295-a4202e8fc8c3,},Annotations:map[string]string{io.kubernetes.container.hash: 27446c8b,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1c58dca73e3cce0160cffb2a2ca266c63aaf632986703e915acd2f8e56f7b77,PodSandboxId:84cf7b9fd7aa639d07509a9df07d14db06e3f176a750a4e49a27ab5fea5978de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698800740234980877,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cwbfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7f5ba1e-bd63-456b-94cc-0e2c121b7792,},Annotations:map[string]string{io.kubernetes.container.hash: aaa212e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a2f7f37e23492d7c891917cbc79897ac18943c36594fedf027550d2f6b006ed,PodSandboxId:e245796915c72f2ae4030a1d8a8cd6db1edb8e02c0481b1f2e1d6d7dc22659f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698800739840337972,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6kqbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e03e6370-35d1-4438-8b18-d62b0a253ea6,},Annotations:map[string]string{io.kubernetes.container.hash: 515528ed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61170a3d73c795f3fe15b7ec6a56f67d0bbde0572c053b74e74ee78d2e13ce96,PodSandboxId:b78c4d5b084d1480831966342f00b5efe25a5e80e2a41cfdeb05a02c460eed3b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698800716261746108,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-754132,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 0a6ee9577f47faf2fcc83cf18cc76050,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:272478e18337c09b38f96f1b3110d25b51954a59ff295ca6699f743b27b0e20d,PodSandboxId:b5d6ed323107ae773936ce033ed465f9ed6cbbaaa2686cf4dc10348c782c761c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698800715830550356,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-754132,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d8dc3bb5d9b817ec64d94b3b634f0ac,},Annotations:
map[string]string{io.kubernetes.container.hash: 79674182,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cae13d3fdeec1e5275a2e5c1d1a9fc6af5ba238cf5ec981846cec6711a32c7ea,PodSandboxId:19bfc042d89ec7a403f3d85c6495c44be359ae22feff45d85c7efa92ff8af12d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698800715683397207,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-754132,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de7577929e1604837b75
088bed2286c,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f00b1e4feeac20c5deb9f138667fe2d11217e3e417d8c54a269693561f3529f6,PodSandboxId:bdb69288d9d92d54c9e97730b1d05f62324d0e95fe9792109e5a3d41d8a46e22,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698800715655681571,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-754132,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc86b9788e9fe6115f54b92ff1ed7d8
7,},Annotations:map[string]string{io.kubernetes.container.hash: d73d1b25,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3f060035-6921-4f41-a014-e9f1da89720a name=/runtime.v1.RuntimeService/ListContainers
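
The CRI-O entries above come straight from journald on the node (the window is stated in the "Journal begins at ..." line). A sketch of pulling the same unit log directly, assuming the profile name used in this section:

	minikube -p embed-certs-754132 ssh "sudo journalctl -u crio --no-pager"
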
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d28340698815c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   d409231bf8be0       storage-provisioner
	a1c58dca73e3c       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   9 minutes ago       Running             kube-proxy                0                   84cf7b9fd7aa6       kube-proxy-cwbfz
	6a2f7f37e2349       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   e245796915c72       coredns-5dd5756b68-6kqbc
	61170a3d73c79       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   9 minutes ago       Running             kube-scheduler            2                   b78c4d5b084d1       kube-scheduler-embed-certs-754132
	272478e18337c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   b5d6ed323107a       etcd-embed-certs-754132
	cae13d3fdeec1       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   9 minutes ago       Running             kube-controller-manager   2                   19bfc042d89ec       kube-controller-manager-embed-certs-754132
	f00b1e4feeac2       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   9 minutes ago       Running             kube-apiserver            2                   bdb69288d9d92       kube-apiserver-embed-certs-754132
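
The container listing above matches what CRI-O's command-line client reports on the node. A sketch of gathering it by hand, assuming crictl is available inside the embed-certs-754132 VM:

	minikube -p embed-certs-754132 ssh "sudo crictl ps -a"
	# columns: CONTAINER, IMAGE, CREATED, STATE, NAME, ATTEMPT, POD ID, POD
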
	
	* 
	* ==> coredns [6a2f7f37e23492d7c891917cbc79897ac18943c36594fedf027550d2f6b006ed] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
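
The two "Reloading" entries above show CoreDNS picking up a Corefile change. A sketch of fetching the same pod logs and the Corefile from the host, assuming the standard kubeadm label and configmap name:

	kubectl --context embed-certs-754132 -n kube-system logs -l k8s-app=kube-dns
	kubectl --context embed-certs-754132 -n kube-system get configmap coredns -o yaml
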
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-754132
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-754132
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9
	                    minikube.k8s.io/name=embed-certs-754132
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_01T01_05_23_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Nov 2023 01:05:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-754132
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Nov 2023 01:14:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Nov 2023 01:10:50 +0000   Wed, 01 Nov 2023 01:05:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Nov 2023 01:10:50 +0000   Wed, 01 Nov 2023 01:05:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Nov 2023 01:10:50 +0000   Wed, 01 Nov 2023 01:05:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Nov 2023 01:10:50 +0000   Wed, 01 Nov 2023 01:05:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.83
	  Hostname:    embed-certs-754132
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 b3f56a2074104288a4dc0065652f0242
	  System UUID:                b3f56a20-7410-4288-a4dc-0065652f0242
	  Boot ID:                    70f715ef-1758-4a32-8563-70324dc16d05
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-6kqbc                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-embed-certs-754132                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-embed-certs-754132             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-embed-certs-754132    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-cwbfz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-embed-certs-754132             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-57f55c9bc5-499xs               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m2s   kube-proxy       
	  Normal  Starting                 9m20s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m20s  kubelet          Node embed-certs-754132 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s  kubelet          Node embed-certs-754132 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s  kubelet          Node embed-certs-754132 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m20s  kubelet          Node embed-certs-754132 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m20s  kubelet          Node embed-certs-754132 status is now: NodeReady
	  Normal  RegisteredNode           9m8s   node-controller  Node embed-certs-754132 event: Registered Node embed-certs-754132 in Controller
	
	* 
	* ==> dmesg <==
	* [Nov 1 00:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.065147] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Nov 1 01:00] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.759416] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.136094] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.391126] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.783508] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.110039] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.156651] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.124192] systemd-fstab-generator[684]: Ignoring "noauto" for root device
	[  +0.242489] systemd-fstab-generator[708]: Ignoring "noauto" for root device
	[ +17.096705] systemd-fstab-generator[921]: Ignoring "noauto" for root device
	[ +22.496796] kauditd_printk_skb: 34 callbacks suppressed
	[  +3.134284] hrtimer: interrupt took 2781045 ns
	[Nov 1 01:05] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.099390] systemd-fstab-generator[3663]: Ignoring "noauto" for root device
	[  +9.298397] systemd-fstab-generator[3988]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [272478e18337c09b38f96f1b3110d25b51954a59ff295ca6699f743b27b0e20d] <==
	* {"level":"info","ts":"2023-11-01T01:05:17.289962Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1706423cc6d0face switched to configuration voters=(1659086341533661902)"}
	{"level":"info","ts":"2023-11-01T01:05:17.292346Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bef7c63622dde9b5","local-member-id":"1706423cc6d0face","added-peer-id":"1706423cc6d0face","added-peer-peer-urls":["https://192.168.61.83:2380"]}
	{"level":"info","ts":"2023-11-01T01:05:17.294204Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-01T01:05:17.294239Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.83:2380"}
	{"level":"info","ts":"2023-11-01T01:05:17.297615Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.83:2380"}
	{"level":"info","ts":"2023-11-01T01:05:17.304596Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"1706423cc6d0face","initial-advertise-peer-urls":["https://192.168.61.83:2380"],"listen-peer-urls":["https://192.168.61.83:2380"],"advertise-client-urls":["https://192.168.61.83:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.83:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-01T01:05:17.30536Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-01T01:05:17.828221Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1706423cc6d0face is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-01T01:05:17.828451Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1706423cc6d0face became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-01T01:05:17.828489Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1706423cc6d0face received MsgPreVoteResp from 1706423cc6d0face at term 1"}
	{"level":"info","ts":"2023-11-01T01:05:17.828561Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1706423cc6d0face became candidate at term 2"}
	{"level":"info","ts":"2023-11-01T01:05:17.828576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1706423cc6d0face received MsgVoteResp from 1706423cc6d0face at term 2"}
	{"level":"info","ts":"2023-11-01T01:05:17.828589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1706423cc6d0face became leader at term 2"}
	{"level":"info","ts":"2023-11-01T01:05:17.828596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1706423cc6d0face elected leader 1706423cc6d0face at term 2"}
	{"level":"info","ts":"2023-11-01T01:05:17.830335Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T01:05:17.831077Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"1706423cc6d0face","local-member-attributes":"{Name:embed-certs-754132 ClientURLs:[https://192.168.61.83:2379]}","request-path":"/0/members/1706423cc6d0face/attributes","cluster-id":"bef7c63622dde9b5","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-01T01:05:17.831617Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-01T01:05:17.834558Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-01T01:05:17.835Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-01T01:05:17.837331Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-01T01:05:17.837375Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-01T01:05:17.83764Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.83:2379"}
	{"level":"info","ts":"2023-11-01T01:05:17.839553Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bef7c63622dde9b5","local-member-id":"1706423cc6d0face","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T01:05:17.839666Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T01:05:17.839717Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  01:14:43 up 14 min,  0 users,  load average: 0.39, 0.50, 0.32
	Linux embed-certs-754132 5.10.57 #1 SMP Tue Oct 31 22:14:31 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [f00b1e4feeac20c5deb9f138667fe2d11217e3e417d8c54a269693561f3529f6] <==
	* W1101 01:10:20.744008       1 handler_proxy.go:93] no RequestInfo found in the context
	E1101 01:10:20.744127       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1101 01:10:20.744159       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1101 01:10:20.744070       1 handler_proxy.go:93] no RequestInfo found in the context
	E1101 01:10:20.744334       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1101 01:10:20.745623       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1101 01:11:19.650569       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1101 01:11:20.744472       1 handler_proxy.go:93] no RequestInfo found in the context
	E1101 01:11:20.744639       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1101 01:11:20.744677       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1101 01:11:20.746821       1 handler_proxy.go:93] no RequestInfo found in the context
	E1101 01:11:20.746936       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1101 01:11:20.746949       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1101 01:12:19.649991       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1101 01:13:19.650011       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1101 01:13:20.744905       1 handler_proxy.go:93] no RequestInfo found in the context
	E1101 01:13:20.744988       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1101 01:13:20.745002       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1101 01:13:20.747125       1 handler_proxy.go:93] no RequestInfo found in the context
	E1101 01:13:20.747374       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1101 01:13:20.747426       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1101 01:14:19.649974       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [cae13d3fdeec1e5275a2e5c1d1a9fc6af5ba238cf5ec981846cec6711a32c7ea] <==
	* I1101 01:09:06.336038       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:09:35.928489       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:09:36.348793       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:10:05.935724       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:10:06.365503       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:10:35.942743       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:10:36.376637       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:11:05.948330       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:11:06.386574       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1101 01:11:32.307463       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="387.563µs"
	E1101 01:11:35.954779       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:11:36.394888       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1101 01:11:47.301670       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="151.499µs"
	E1101 01:12:05.960539       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:12:06.408423       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:12:35.967412       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:12:36.418304       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:13:05.974693       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:13:06.426230       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:13:35.981899       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:13:36.436735       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:14:05.987716       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:14:06.445034       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:14:35.994595       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:14:36.454179       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [a1c58dca73e3cce0160cffb2a2ca266c63aaf632986703e915acd2f8e56f7b77] <==
	* I1101 01:05:40.727585       1 server_others.go:69] "Using iptables proxy"
	I1101 01:05:40.747590       1 node.go:141] Successfully retrieved node IP: 192.168.61.83
	I1101 01:05:40.804123       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1101 01:05:40.804166       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 01:05:40.806853       1 server_others.go:152] "Using iptables Proxier"
	I1101 01:05:40.807165       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 01:05:40.807593       1 server.go:846] "Version info" version="v1.28.3"
	I1101 01:05:40.807627       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 01:05:40.812889       1 config.go:188] "Starting service config controller"
	I1101 01:05:40.812968       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 01:05:40.813039       1 config.go:315] "Starting node config controller"
	I1101 01:05:40.813047       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 01:05:40.814651       1 config.go:97] "Starting endpoint slice config controller"
	I1101 01:05:40.814777       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 01:05:40.913203       1 shared_informer.go:318] Caches are synced for node config
	I1101 01:05:40.913203       1 shared_informer.go:318] Caches are synced for service config
	I1101 01:05:40.915505       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [61170a3d73c795f3fe15b7ec6a56f67d0bbde0572c053b74e74ee78d2e13ce96] <==
	* E1101 01:05:19.860890       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1101 01:05:19.860936       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1101 01:05:20.683176       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1101 01:05:20.683227       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1101 01:05:20.753653       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1101 01:05:20.753739       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1101 01:05:20.800588       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1101 01:05:20.800711       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1101 01:05:20.809454       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1101 01:05:20.809567       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1101 01:05:20.938074       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1101 01:05:20.938178       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1101 01:05:20.957589       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1101 01:05:20.957696       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1101 01:05:20.981604       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1101 01:05:20.981650       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 01:05:21.022807       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1101 01:05:21.022859       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1101 01:05:21.072791       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1101 01:05:21.072842       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1101 01:05:21.078846       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1101 01:05:21.078907       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1101 01:05:21.134980       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1101 01:05:21.135095       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I1101 01:05:23.938532       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-11-01 01:00:04 UTC, ends at Wed 2023-11-01 01:14:43 UTC. --
	Nov 01 01:11:58 embed-certs-754132 kubelet[3995]: E1101 01:11:58.281184    3995 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-499xs" podUID="617aecda-f132-4358-9da9-bbc4fc625da0"
	Nov 01 01:12:10 embed-certs-754132 kubelet[3995]: E1101 01:12:10.281968    3995 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-499xs" podUID="617aecda-f132-4358-9da9-bbc4fc625da0"
	Nov 01 01:12:23 embed-certs-754132 kubelet[3995]: E1101 01:12:23.282065    3995 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-499xs" podUID="617aecda-f132-4358-9da9-bbc4fc625da0"
	Nov 01 01:12:23 embed-certs-754132 kubelet[3995]: E1101 01:12:23.324026    3995 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 01 01:12:23 embed-certs-754132 kubelet[3995]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 01 01:12:23 embed-certs-754132 kubelet[3995]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 01 01:12:23 embed-certs-754132 kubelet[3995]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 01 01:12:36 embed-certs-754132 kubelet[3995]: E1101 01:12:36.280808    3995 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-499xs" podUID="617aecda-f132-4358-9da9-bbc4fc625da0"
	Nov 01 01:12:49 embed-certs-754132 kubelet[3995]: E1101 01:12:49.280986    3995 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-499xs" podUID="617aecda-f132-4358-9da9-bbc4fc625da0"
	Nov 01 01:13:03 embed-certs-754132 kubelet[3995]: E1101 01:13:03.281391    3995 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-499xs" podUID="617aecda-f132-4358-9da9-bbc4fc625da0"
	Nov 01 01:13:15 embed-certs-754132 kubelet[3995]: E1101 01:13:15.282702    3995 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-499xs" podUID="617aecda-f132-4358-9da9-bbc4fc625da0"
	Nov 01 01:13:23 embed-certs-754132 kubelet[3995]: E1101 01:13:23.322680    3995 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 01 01:13:23 embed-certs-754132 kubelet[3995]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 01 01:13:23 embed-certs-754132 kubelet[3995]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 01 01:13:23 embed-certs-754132 kubelet[3995]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 01 01:13:29 embed-certs-754132 kubelet[3995]: E1101 01:13:29.283178    3995 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-499xs" podUID="617aecda-f132-4358-9da9-bbc4fc625da0"
	Nov 01 01:13:43 embed-certs-754132 kubelet[3995]: E1101 01:13:43.281225    3995 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-499xs" podUID="617aecda-f132-4358-9da9-bbc4fc625da0"
	Nov 01 01:13:58 embed-certs-754132 kubelet[3995]: E1101 01:13:58.281418    3995 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-499xs" podUID="617aecda-f132-4358-9da9-bbc4fc625da0"
	Nov 01 01:14:09 embed-certs-754132 kubelet[3995]: E1101 01:14:09.281115    3995 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-499xs" podUID="617aecda-f132-4358-9da9-bbc4fc625da0"
	Nov 01 01:14:20 embed-certs-754132 kubelet[3995]: E1101 01:14:20.280997    3995 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-499xs" podUID="617aecda-f132-4358-9da9-bbc4fc625da0"
	Nov 01 01:14:23 embed-certs-754132 kubelet[3995]: E1101 01:14:23.323338    3995 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 01 01:14:23 embed-certs-754132 kubelet[3995]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 01 01:14:23 embed-certs-754132 kubelet[3995]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 01 01:14:23 embed-certs-754132 kubelet[3995]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 01 01:14:34 embed-certs-754132 kubelet[3995]: E1101 01:14:34.281331    3995 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-499xs" podUID="617aecda-f132-4358-9da9-bbc4fc625da0"
	
	* 
	* ==> storage-provisioner [d28340698815c870c266c1c350e03df688140bcf1e135a7004963522db855047] <==
	* I1101 01:05:40.553405       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 01:05:40.578831       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 01:05:40.578918       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 01:05:40.610239       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 01:05:40.611325       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-754132_4d090253-2492-4940-a7bc-e0a5c210e6de!
	I1101 01:05:40.612944       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"efcc0b08-4bc1-4dca-a43c-aa319d18bea1", APIVersion:"v1", ResourceVersion:"423", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-754132_4d090253-2492-4940-a7bc-e0a5c210e6de became leader
	I1101 01:05:40.712599       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-754132_4d090253-2492-4940-a7bc-e0a5c210e6de!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-754132 -n embed-certs-754132
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-754132 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-499xs
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-754132 describe pod metrics-server-57f55c9bc5-499xs
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-754132 describe pod metrics-server-57f55c9bc5-499xs: exit status 1 (74.972185ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-499xs" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-754132 describe pod metrics-server-57f55c9bc5-499xs: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.26s)
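
Note: the repeated kubelet ImagePullBackOff errors above for fake.domain/registry.k8s.io/echoserver:1.4 follow from the Audit entries in this report showing metrics-server being enabled with --registries=MetricsServer=fake.domain, which points the addon image at a non-existent registry; the back-off is a consequence of that configuration rather than a registry outage. To confirm the configured image on the live cluster, something along these lines would show it (assuming the addon's workload is the kube-system Deployment named metrics-server, implied by the ReplicaSet name but not spelled out verbatim here):

	kubectl --context embed-certs-754132 -n kube-system get deployment metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'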

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-639310 -n default-k8s-diff-port-639310
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-11-01 01:15:24.442692676 +0000 UTC m=+5498.007274680
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
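
The wait above targets pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace. A rough way to reproduce the same check by hand against this profile (a sketch of the condition being polled, not a command the test itself ran):

	kubectl --context default-k8s-diff-port-639310 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
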
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-639310 -n default-k8s-diff-port-639310
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-639310 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-639310 logs -n 25: (1.636482039s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p flannel-090856 sudo                                 | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | containerd config dump                                 |                              |         |                |                     |                     |
	| ssh     | -p flannel-090856 sudo                                 | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | systemctl status crio --all                            |                              |         |                |                     |                     |
	|         | --full --no-pager                                      |                              |         |                |                     |                     |
	| ssh     | -p flannel-090856 sudo                                 | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |                |                     |                     |
	| start   | -p embed-certs-754132                                  | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:52 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| ssh     | -p flannel-090856 sudo find                            | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |                |                     |                     |
	| ssh     | -p flannel-090856 sudo crio                            | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | config                                                 |                              |         |                |                     |                     |
	| delete  | -p flannel-090856                                      | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	| delete  | -p                                                     | disable-driver-mounts-130996 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | disable-driver-mounts-130996                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:53 UTC |
	|         | default-k8s-diff-port-639310                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-008483             | no-preload-008483            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC | 01 Nov 23 00:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-008483                                   | no-preload-008483            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-754132            | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC | 01 Nov 23 00:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-754132                                  | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-330042        | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC | 01 Nov 23 00:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-330042                              | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-639310  | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:53 UTC | 01 Nov 23 00:53 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:53 UTC |                     |
	|         | default-k8s-diff-port-639310                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-008483                  | no-preload-008483            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-754132                 | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-008483                                   | no-preload-008483            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC | 01 Nov 23 01:06 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| start   | -p embed-certs-754132                                  | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC | 01 Nov 23 01:05 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-330042             | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-330042                              | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC | 01 Nov 23 01:07 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-639310       | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:56 UTC | 01 Nov 23 01:06 UTC |
	|         | default-k8s-diff-port-639310                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/01 00:56:25
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 00:56:25.029853   59148 out.go:296] Setting OutFile to fd 1 ...
	I1101 00:56:25.030119   59148 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:56:25.030128   59148 out.go:309] Setting ErrFile to fd 2...
	I1101 00:56:25.030133   59148 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:56:25.030311   59148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7305/.minikube/bin
	I1101 00:56:25.030856   59148 out.go:303] Setting JSON to false
	I1101 00:56:25.031741   59148 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5930,"bootTime":1698794255,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 00:56:25.031805   59148 start.go:138] virtualization: kvm guest
	I1101 00:56:25.034341   59148 out.go:177] * [default-k8s-diff-port-639310] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1101 00:56:25.036261   59148 out.go:177]   - MINIKUBE_LOCATION=17486
	I1101 00:56:25.037829   59148 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 00:56:25.036294   59148 notify.go:220] Checking for updates...
	I1101 00:56:25.041068   59148 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 00:56:25.042691   59148 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7305/.minikube
	I1101 00:56:25.044204   59148 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 00:56:25.045719   59148 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 00:56:25.047781   59148 config.go:182] Loaded profile config "default-k8s-diff-port-639310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:56:25.048183   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:56:25.048245   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:56:25.062714   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34345
	I1101 00:56:25.063108   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:56:25.063662   59148 main.go:141] libmachine: Using API Version  1
	I1101 00:56:25.063682   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:56:25.064083   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:56:25.064302   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 00:56:25.064571   59148 driver.go:378] Setting default libvirt URI to qemu:///system
	I1101 00:56:25.064917   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:56:25.064958   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:56:25.079214   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46451
	I1101 00:56:25.079576   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:56:25.080090   59148 main.go:141] libmachine: Using API Version  1
	I1101 00:56:25.080115   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:56:25.080419   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:56:25.080616   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 00:56:25.119015   59148 out.go:177] * Using the kvm2 driver based on existing profile
	I1101 00:56:25.120650   59148 start.go:298] selected driver: kvm2
	I1101 00:56:25.120670   59148 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-639310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-639310 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.97 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:56:25.120819   59148 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 00:56:25.121515   59148 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:56:25.121580   59148 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17486-7305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1101 00:56:25.137482   59148 install.go:137] /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1101 00:56:25.137885   59148 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 00:56:25.137962   59148 cni.go:84] Creating CNI manager for ""
	I1101 00:56:25.137976   59148 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 00:56:25.137988   59148 start_flags.go:323] config:
	{Name:default-k8s-diff-port-639310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-639310 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.97 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:56:25.138186   59148 iso.go:125] acquiring lock: {Name:mk1f649ca0b7c1ae293cd66cb85f9eeda028b20b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:56:25.140405   59148 out.go:177] * Starting control plane node default-k8s-diff-port-639310 in cluster default-k8s-diff-port-639310
	I1101 00:56:25.141855   59148 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 00:56:25.141918   59148 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1101 00:56:25.141935   59148 cache.go:56] Caching tarball of preloaded images
	I1101 00:56:25.142048   59148 preload.go:174] Found /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 00:56:25.142066   59148 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1101 00:56:25.142204   59148 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/config.json ...
	I1101 00:56:25.142449   59148 start.go:365] acquiring machines lock for default-k8s-diff-port-639310: {Name:mk7aad88408c319111b9be8e59d9593a9e88374b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 00:56:26.060176   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:29.132322   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:35.212221   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:38.284225   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:44.364219   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:47.436224   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:53.516201   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:56.588256   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:02.668213   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:05.740252   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:11.820242   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:14.892259   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:20.972213   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:24.044181   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:30.124291   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:33.196239   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:39.276183   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:42.348235   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:48.428230   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:51.500275   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:57.580250   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:00.652208   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:06.732207   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:09.804251   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:15.884265   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:18.956206   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:25.040217   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:28.108288   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:34.188238   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:37.260268   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:43.340210   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:46.412248   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:52.492221   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:55.564188   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:01.644193   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:04.716194   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:10.796265   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:13.868226   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:19.948219   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:23.020283   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:29.100251   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:32.172268   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:38.252219   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:41.324223   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:47.404323   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:50.476273   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:53.480339   58730 start.go:369] acquired machines lock for "embed-certs-754132" in 4m35.118425724s
	I1101 00:59:53.480387   58730 start.go:96] Skipping create...Using existing machine configuration
	I1101 00:59:53.480393   58730 fix.go:54] fixHost starting: 
	I1101 00:59:53.480707   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:59:53.480737   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:59:53.495582   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34891
	I1101 00:59:53.495998   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:59:53.496445   58730 main.go:141] libmachine: Using API Version  1
	I1101 00:59:53.496466   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:59:53.496844   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:59:53.497017   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 00:59:53.497171   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetState
	I1101 00:59:53.498937   58730 fix.go:102] recreateIfNeeded on embed-certs-754132: state=Stopped err=<nil>
	I1101 00:59:53.498956   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	W1101 00:59:53.499128   58730 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 00:59:53.500909   58730 out.go:177] * Restarting existing kvm2 VM for "embed-certs-754132" ...
	I1101 00:59:53.478140   58676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 00:59:53.478177   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 00:59:53.480187   58676 machine.go:91] provisioned docker machine in 4m37.408348367s
	I1101 00:59:53.480232   58676 fix.go:56] fixHost completed within 4m37.430154401s
	I1101 00:59:53.480241   58676 start.go:83] releasing machines lock for "no-preload-008483", held for 4m37.430178737s
	W1101 00:59:53.480270   58676 start.go:691] error starting host: provision: host is not running
	W1101 00:59:53.480361   58676 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1101 00:59:53.480371   58676 start.go:706] Will try again in 5 seconds ...
	I1101 00:59:53.502467   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Start
	I1101 00:59:53.502656   58730 main.go:141] libmachine: (embed-certs-754132) Ensuring networks are active...
	I1101 00:59:53.503633   58730 main.go:141] libmachine: (embed-certs-754132) Ensuring network default is active
	I1101 00:59:53.504036   58730 main.go:141] libmachine: (embed-certs-754132) Ensuring network mk-embed-certs-754132 is active
	I1101 00:59:53.504557   58730 main.go:141] libmachine: (embed-certs-754132) Getting domain xml...
	I1101 00:59:53.505302   58730 main.go:141] libmachine: (embed-certs-754132) Creating domain...
	I1101 00:59:54.749625   58730 main.go:141] libmachine: (embed-certs-754132) Waiting to get IP...
	I1101 00:59:54.750551   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:54.750924   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:54.751002   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:54.750917   59675 retry.go:31] will retry after 295.652358ms: waiting for machine to come up
	I1101 00:59:55.048450   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:55.048884   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:55.048910   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:55.048845   59675 retry.go:31] will retry after 335.376353ms: waiting for machine to come up
	I1101 00:59:55.385612   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:55.385959   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:55.386000   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:55.385952   59675 retry.go:31] will retry after 353.381783ms: waiting for machine to come up
	I1101 00:59:55.740456   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:55.740943   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:55.740979   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:55.740874   59675 retry.go:31] will retry after 417.863733ms: waiting for machine to come up
	I1101 00:59:56.160773   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:56.161271   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:56.161298   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:56.161236   59675 retry.go:31] will retry after 659.454883ms: waiting for machine to come up
	I1101 00:59:56.822139   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:56.822551   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:56.822573   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:56.822511   59675 retry.go:31] will retry after 627.06089ms: waiting for machine to come up
	I1101 00:59:57.451254   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:57.451659   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:57.451687   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:57.451624   59675 retry.go:31] will retry after 1.095096876s: waiting for machine to come up
	I1101 00:59:58.481145   58676 start.go:365] acquiring machines lock for no-preload-008483: {Name:mk7aad88408c319111b9be8e59d9593a9e88374b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 00:59:58.548870   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:58.549359   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:58.549410   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:58.549323   59675 retry.go:31] will retry after 1.133377858s: waiting for machine to come up
	I1101 00:59:59.684741   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:59.685182   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:59.685205   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:59.685149   59675 retry.go:31] will retry after 1.332824718s: waiting for machine to come up
	I1101 01:00:01.019662   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:01.020166   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 01:00:01.020217   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 01:00:01.020119   59675 retry.go:31] will retry after 1.62664347s: waiting for machine to come up
	I1101 01:00:02.649017   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:02.649459   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 01:00:02.649490   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 01:00:02.649404   59675 retry.go:31] will retry after 2.043788133s: waiting for machine to come up
	I1101 01:00:04.695225   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:04.695657   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 01:00:04.695711   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 01:00:04.695640   59675 retry.go:31] will retry after 2.435347975s: waiting for machine to come up
	I1101 01:00:07.133078   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:07.133531   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 01:00:07.133567   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 01:00:07.133492   59675 retry.go:31] will retry after 2.768108097s: waiting for machine to come up
	I1101 01:00:09.903094   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:09.903460   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 01:00:09.903484   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 01:00:09.903424   59675 retry.go:31] will retry after 3.955575113s: waiting for machine to come up
	I1101 01:00:15.240546   58823 start.go:369] acquired machines lock for "old-k8s-version-330042" in 4m47.663537715s
	I1101 01:00:15.240608   58823 start.go:96] Skipping create...Using existing machine configuration
	I1101 01:00:15.240616   58823 fix.go:54] fixHost starting: 
	I1101 01:00:15.241087   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:00:15.241135   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:00:15.260921   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45157
	I1101 01:00:15.261342   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:00:15.261921   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:00:15.261954   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:00:15.262285   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:00:15.262488   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:15.262657   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetState
	I1101 01:00:15.264332   58823 fix.go:102] recreateIfNeeded on old-k8s-version-330042: state=Stopped err=<nil>
	I1101 01:00:15.264357   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	W1101 01:00:15.264541   58823 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 01:00:15.266960   58823 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-330042" ...
	I1101 01:00:13.860184   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.860818   58730 main.go:141] libmachine: (embed-certs-754132) Found IP for machine: 192.168.61.83
	I1101 01:00:13.860849   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has current primary IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.860866   58730 main.go:141] libmachine: (embed-certs-754132) Reserving static IP address...
	I1101 01:00:13.861321   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "embed-certs-754132", mac: "52:54:00:5e:2f:dd", ip: "192.168.61.83"} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:13.861350   58730 main.go:141] libmachine: (embed-certs-754132) Reserved static IP address: 192.168.61.83
	I1101 01:00:13.861362   58730 main.go:141] libmachine: (embed-certs-754132) DBG | skip adding static IP to network mk-embed-certs-754132 - found existing host DHCP lease matching {name: "embed-certs-754132", mac: "52:54:00:5e:2f:dd", ip: "192.168.61.83"}
	I1101 01:00:13.861372   58730 main.go:141] libmachine: (embed-certs-754132) Waiting for SSH to be available...
	I1101 01:00:13.861384   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Getting to WaitForSSH function...
	I1101 01:00:13.864760   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.865204   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:13.865232   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.865368   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Using SSH client type: external
	I1101 01:00:13.865408   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa (-rw-------)
	I1101 01:00:13.865434   58730 main.go:141] libmachine: (embed-certs-754132) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.83 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 01:00:13.865446   58730 main.go:141] libmachine: (embed-certs-754132) DBG | About to run SSH command:
	I1101 01:00:13.865454   58730 main.go:141] libmachine: (embed-certs-754132) DBG | exit 0
	I1101 01:00:13.964103   58730 main.go:141] libmachine: (embed-certs-754132) DBG | SSH cmd err, output: <nil>: 
	I1101 01:00:13.964444   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetConfigRaw
	I1101 01:00:13.965066   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetIP
	I1101 01:00:13.967463   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.967768   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:13.967791   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.968100   58730 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/config.json ...
	I1101 01:00:13.968294   58730 machine.go:88] provisioning docker machine ...
	I1101 01:00:13.968312   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:00:13.968530   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetMachineName
	I1101 01:00:13.968707   58730 buildroot.go:166] provisioning hostname "embed-certs-754132"
	I1101 01:00:13.968728   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetMachineName
	I1101 01:00:13.968901   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:13.971288   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.971637   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:13.971676   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.971792   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:13.972000   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:13.972181   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:13.972312   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:13.972476   58730 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:13.972798   58730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I1101 01:00:13.972812   58730 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-754132 && echo "embed-certs-754132" | sudo tee /etc/hostname
	I1101 01:00:14.121000   58730 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-754132
	
	I1101 01:00:14.121036   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:14.124379   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.124813   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:14.124840   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.125085   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:14.125339   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:14.125667   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:14.125832   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:14.126091   58730 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:14.126401   58730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I1101 01:00:14.126418   58730 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-754132' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-754132/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-754132' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 01:00:14.268155   58730 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 01:00:14.268188   58730 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1101 01:00:14.268210   58730 buildroot.go:174] setting up certificates
	I1101 01:00:14.268238   58730 provision.go:83] configureAuth start
	I1101 01:00:14.268255   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetMachineName
	I1101 01:00:14.268542   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetIP
	I1101 01:00:14.271516   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.271946   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:14.271984   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.272150   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:14.274610   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.275017   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:14.275054   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.275206   58730 provision.go:138] copyHostCerts
	I1101 01:00:14.275269   58730 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem, removing ...
	I1101 01:00:14.275282   58730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 01:00:14.275351   58730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1101 01:00:14.275442   58730 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem, removing ...
	I1101 01:00:14.275450   58730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 01:00:14.275475   58730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1101 01:00:14.275526   58730 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem, removing ...
	I1101 01:00:14.275533   58730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 01:00:14.275571   58730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1101 01:00:14.275616   58730 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.embed-certs-754132 san=[192.168.61.83 192.168.61.83 localhost 127.0.0.1 minikube embed-certs-754132]
	I1101 01:00:14.494175   58730 provision.go:172] copyRemoteCerts
	I1101 01:00:14.494239   58730 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 01:00:14.494265   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:14.496921   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.497263   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:14.497310   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.497482   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:14.497748   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:14.497906   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:14.498052   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:00:14.592739   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 01:00:14.614862   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1101 01:00:14.636483   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1101 01:00:14.658154   58730 provision.go:86] duration metric: configureAuth took 389.900669ms
	I1101 01:00:14.658179   58730 buildroot.go:189] setting minikube options for container-runtime
	I1101 01:00:14.658364   58730 config.go:182] Loaded profile config "embed-certs-754132": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:00:14.658478   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:14.661110   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.661450   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:14.661500   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.661667   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:14.661853   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:14.661997   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:14.662120   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:14.662279   58730 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:14.662573   58730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I1101 01:00:14.662589   58730 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 01:00:14.974481   58730 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 01:00:14.974505   58730 machine.go:91] provisioned docker machine in 1.006198078s
	I1101 01:00:14.974521   58730 start.go:300] post-start starting for "embed-certs-754132" (driver="kvm2")
	I1101 01:00:14.974534   58730 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 01:00:14.974556   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:00:14.974913   58730 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 01:00:14.974946   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:14.977485   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.977815   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:14.977846   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.977970   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:14.978146   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:14.978310   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:14.978470   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:00:15.073889   58730 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 01:00:15.077710   58730 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 01:00:15.077734   58730 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1101 01:00:15.077791   58730 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1101 01:00:15.077855   58730 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> 145042.pem in /etc/ssl/certs
	I1101 01:00:15.077961   58730 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 01:00:15.086567   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:00:15.107446   58730 start.go:303] post-start completed in 132.911351ms
	I1101 01:00:15.107468   58730 fix.go:56] fixHost completed within 21.627074953s
	I1101 01:00:15.107485   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:15.110070   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.110392   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:15.110426   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.110552   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:15.110748   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:15.110914   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:15.111078   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:15.111268   58730 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:15.111683   58730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I1101 01:00:15.111696   58730 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1101 01:00:15.240326   58730 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698800415.188118531
	
	I1101 01:00:15.240357   58730 fix.go:206] guest clock: 1698800415.188118531
	I1101 01:00:15.240365   58730 fix.go:219] Guest: 2023-11-01 01:00:15.188118531 +0000 UTC Remote: 2023-11-01 01:00:15.107470988 +0000 UTC m=+296.909935143 (delta=80.647543ms)
	I1101 01:00:15.240385   58730 fix.go:190] guest clock delta is within tolerance: 80.647543ms
	I1101 01:00:15.240420   58730 start.go:83] releasing machines lock for "embed-certs-754132", held for 21.760022516s
	I1101 01:00:15.240464   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:00:15.240736   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetIP
	I1101 01:00:15.243570   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.243905   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:15.243961   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.244163   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:00:15.244698   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:00:15.244872   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:00:15.244948   58730 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 01:00:15.245012   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:15.245063   58730 ssh_runner.go:195] Run: cat /version.json
	I1101 01:00:15.245089   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:15.247618   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.247886   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.247985   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:15.248018   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.248265   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:15.248358   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:15.248387   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.248422   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:15.248600   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:15.248601   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:15.248774   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:15.248765   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:00:15.248913   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:15.249034   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:00:15.383514   58730 ssh_runner.go:195] Run: systemctl --version
	I1101 01:00:15.389291   58730 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 01:00:15.531982   58730 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 01:00:15.537622   58730 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 01:00:15.537711   58730 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 01:00:15.554440   58730 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 01:00:15.554488   58730 start.go:472] detecting cgroup driver to use...
	I1101 01:00:15.554549   58730 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 01:00:15.569732   58730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 01:00:15.582752   58730 docker.go:204] disabling cri-docker service (if available) ...
	I1101 01:00:15.582795   58730 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 01:00:15.596221   58730 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 01:00:15.609815   58730 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 01:00:15.717679   58730 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 01:00:15.842128   58730 docker.go:220] disabling docker service ...
	I1101 01:00:15.842203   58730 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 01:00:15.854613   58730 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 01:00:15.869487   58730 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 01:00:15.991107   58730 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 01:00:16.118392   58730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 01:00:16.131570   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 01:00:16.150691   58730 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 01:00:16.150755   58730 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:16.160081   58730 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 01:00:16.160171   58730 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:16.170277   58730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:16.180469   58730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:16.189966   58730 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 01:00:16.199465   58730 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 01:00:16.207995   58730 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 01:00:16.208057   58730 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 01:00:16.221491   58730 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 01:00:16.231855   58730 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 01:00:16.355227   58730 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 01:00:16.520341   58730 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 01:00:16.520403   58730 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 01:00:16.525071   58730 start.go:540] Will wait 60s for crictl version
	I1101 01:00:16.525143   58730 ssh_runner.go:195] Run: which crictl
	I1101 01:00:16.529138   58730 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 01:00:16.566007   58730 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1101 01:00:16.566082   58730 ssh_runner.go:195] Run: crio --version
	I1101 01:00:16.612652   58730 ssh_runner.go:195] Run: crio --version
	I1101 01:00:16.665668   58730 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
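
The two "Will wait 60s" steps above cover the window between restarting CRI-O and the runtime answering on its socket. A minimal Go sketch of that wait, assuming the socket path and 60-second budget from the log and a made-up 500ms poll interval (illustrative only, not minikube's own implementation):

    // Poll for the CRI-O socket, then ask crictl for the runtime version.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    func main() {
        const sock = "/var/run/crio/crio.sock" // path from the log above
        deadline := time.Now().Add(60 * time.Second)
        for {
            if _, err := os.Stat(sock); err == nil {
                break // socket exists, the runtime should be answering
            }
            if time.Now().After(deadline) {
                fmt.Fprintln(os.Stderr, "timed out waiting for", sock)
                os.Exit(1)
            }
            time.Sleep(500 * time.Millisecond) // assumed poll interval
        }
        // Same check the log performs via "sudo /usr/bin/crictl version".
        out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
        if err != nil {
            fmt.Fprintln(os.Stderr, "crictl version failed:", err)
            os.Exit(1)
        }
        fmt.Print(string(out))
    }
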
	I1101 01:00:15.268389   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Start
	I1101 01:00:15.268575   58823 main.go:141] libmachine: (old-k8s-version-330042) Ensuring networks are active...
	I1101 01:00:15.269280   58823 main.go:141] libmachine: (old-k8s-version-330042) Ensuring network default is active
	I1101 01:00:15.269618   58823 main.go:141] libmachine: (old-k8s-version-330042) Ensuring network mk-old-k8s-version-330042 is active
	I1101 01:00:15.270056   58823 main.go:141] libmachine: (old-k8s-version-330042) Getting domain xml...
	I1101 01:00:15.270814   58823 main.go:141] libmachine: (old-k8s-version-330042) Creating domain...
	I1101 01:00:16.566526   58823 main.go:141] libmachine: (old-k8s-version-330042) Waiting to get IP...
	I1101 01:00:16.567713   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:16.568239   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:16.568336   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:16.568220   59797 retry.go:31] will retry after 200.046919ms: waiting for machine to come up
	I1101 01:00:16.769849   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:16.770436   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:16.770477   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:16.770427   59797 retry.go:31] will retry after 301.397937ms: waiting for machine to come up
	I1101 01:00:17.074180   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:17.074657   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:17.074689   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:17.074626   59797 retry.go:31] will retry after 462.511505ms: waiting for machine to come up
	I1101 01:00:16.667657   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetIP
	I1101 01:00:16.670756   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:16.671148   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:16.671216   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:16.671377   58730 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1101 01:00:16.675342   58730 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:00:16.687224   58730 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 01:00:16.687310   58730 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:00:16.726714   58730 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1101 01:00:16.726779   58730 ssh_runner.go:195] Run: which lz4
	I1101 01:00:16.730745   58730 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 01:00:16.734588   58730 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 01:00:16.734623   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1101 01:00:17.538840   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:17.539313   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:17.539337   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:17.539276   59797 retry.go:31] will retry after 562.894181ms: waiting for machine to come up
	I1101 01:00:18.104173   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:18.104678   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:18.104712   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:18.104641   59797 retry.go:31] will retry after 659.582768ms: waiting for machine to come up
	I1101 01:00:18.766319   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:18.766719   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:18.766749   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:18.766688   59797 retry.go:31] will retry after 626.783168ms: waiting for machine to come up
	I1101 01:00:19.395203   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:19.395693   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:19.395720   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:19.395651   59797 retry.go:31] will retry after 884.294618ms: waiting for machine to come up
	I1101 01:00:20.281677   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:20.282152   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:20.282176   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:20.282094   59797 retry.go:31] will retry after 997.794459ms: waiting for machine to come up
	I1101 01:00:21.281118   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:21.281568   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:21.281596   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:21.281525   59797 retry.go:31] will retry after 1.624252325s: waiting for machine to come up
	I1101 01:00:18.514400   58730 crio.go:444] Took 1.783693 seconds to copy over tarball
	I1101 01:00:18.514460   58730 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 01:00:21.481089   58730 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.966600648s)
	I1101 01:00:21.481118   58730 crio.go:451] Took 2.966695 seconds to extract the tarball
	I1101 01:00:21.481130   58730 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 01:00:21.520934   58730 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:00:21.568541   58730 crio.go:496] all images are preloaded for cri-o runtime.
	I1101 01:00:21.568569   58730 cache_images.go:84] Images are preloaded, skipping loading
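
The preload check above shells out to "crictl images --output json" and looks for the expected control-plane image. A self-contained sketch of that check, assuming crictl's JSON output shape (an "images" array whose entries carry "repoTags"); the image tag is the one named in the log:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            fmt.Println("unexpected output:", err)
            return
        }
        const want = "registry.k8s.io/kube-apiserver:v1.28.3" // image the log checks for
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                if tag == want {
                    fmt.Println("preload looks complete")
                    return
                }
            }
        }
        fmt.Println("assuming images are not preloaded")
    }
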
	I1101 01:00:21.568638   58730 ssh_runner.go:195] Run: crio config
	I1101 01:00:21.626687   58730 cni.go:84] Creating CNI manager for ""
	I1101 01:00:21.626707   58730 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:00:21.626724   58730 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 01:00:21.626745   58730 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.83 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-754132 NodeName:embed-certs-754132 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.83"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.83 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 01:00:21.626906   58730 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.83
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-754132"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.83
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.83"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 01:00:21.627000   58730 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-754132 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.83
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:embed-certs-754132 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
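
Since the generated kubeadm config shown above is multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), one way to sanity-check its values is to decode each document and read out the fields of interest. A hedged sketch using gopkg.in/yaml.v3 (an assumed external dependency; the file path is the one the log writes the config to):

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path from the log
        if err != nil {
            fmt.Println(err)
            return
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                fmt.Println(err)
                return
            }
            // Only the ClusterConfiguration document carries these fields.
            if doc["kind"] == "ClusterConfiguration" {
                net, _ := doc["networking"].(map[string]interface{})
                fmt.Println("kubernetesVersion:", doc["kubernetesVersion"])
                fmt.Println("podSubnet:", net["podSubnet"])
            }
        }
    }
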
	I1101 01:00:21.627062   58730 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 01:00:21.635965   58730 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 01:00:21.636048   58730 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 01:00:21.644318   58730 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1101 01:00:21.659722   58730 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 01:00:21.674541   58730 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1101 01:00:21.690451   58730 ssh_runner.go:195] Run: grep 192.168.61.83	control-plane.minikube.internal$ /etc/hosts
	I1101 01:00:21.694013   58730 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.83	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:00:21.705929   58730 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132 for IP: 192.168.61.83
	I1101 01:00:21.705978   58730 certs.go:190] acquiring lock for shared ca certs: {Name:mk6185b3938c4b51e80f57b7c81926c81b632b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:00:21.706152   58730 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key
	I1101 01:00:21.706193   58730 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key
	I1101 01:00:21.706255   58730 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/client.key
	I1101 01:00:21.706321   58730 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/apiserver.key.00ce3257
	I1101 01:00:21.706365   58730 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/proxy-client.key
	I1101 01:00:21.706507   58730 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem (1338 bytes)
	W1101 01:00:21.706541   58730 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504_empty.pem, impossibly tiny 0 bytes
	I1101 01:00:21.706552   58730 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 01:00:21.706580   58730 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem (1078 bytes)
	I1101 01:00:21.706606   58730 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem (1123 bytes)
	I1101 01:00:21.706633   58730 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem (1675 bytes)
	I1101 01:00:21.706670   58730 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:00:21.707263   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 01:00:21.734199   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 01:00:21.760230   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 01:00:21.787083   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 01:00:21.810498   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 01:00:21.833905   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 01:00:21.859073   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 01:00:21.881222   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 01:00:21.904432   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem --> /usr/share/ca-certificates/14504.pem (1338 bytes)
	I1101 01:00:21.934873   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /usr/share/ca-certificates/145042.pem (1708 bytes)
	I1101 01:00:21.958353   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 01:00:21.981353   58730 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 01:00:21.997436   58730 ssh_runner.go:195] Run: openssl version
	I1101 01:00:22.003487   58730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14504.pem && ln -fs /usr/share/ca-certificates/14504.pem /etc/ssl/certs/14504.pem"
	I1101 01:00:22.013829   58730 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14504.pem
	I1101 01:00:22.018482   58730 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 01:00:22.018554   58730 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14504.pem
	I1101 01:00:22.024695   58730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14504.pem /etc/ssl/certs/51391683.0"
	I1101 01:00:22.034956   58730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145042.pem && ln -fs /usr/share/ca-certificates/145042.pem /etc/ssl/certs/145042.pem"
	I1101 01:00:22.046182   58730 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145042.pem
	I1101 01:00:22.051197   58730 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 01:00:22.051273   58730 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145042.pem
	I1101 01:00:22.057145   58730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145042.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 01:00:22.067337   58730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 01:00:22.077300   58730 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:22.081973   58730 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:22.082025   58730 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:22.087341   58730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
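
The three test/ln steps above install each CA under /usr/share/ca-certificates and then add the hash-named symlink (for example b5213941.0 for minikubeCA.pem) that OpenSSL uses to look certificates up by subject hash. An illustrative Go sketch of that last step for a single certificate; it shells out to openssl just as the log does and would need root to write under /etc/ssl/certs:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // replace an existing link, like "ln -fs"
        if err := os.Symlink(cert, link); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("linked", link, "->", cert)
    }
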
	I1101 01:00:22.097021   58730 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 01:00:22.101801   58730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 01:00:22.107498   58730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 01:00:22.113187   58730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 01:00:22.119281   58730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 01:00:22.125109   58730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 01:00:22.130878   58730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
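
Each "-checkend 86400" run above asks whether the certificate will still be valid 24 hours from now. The same check can be done without shelling out, by parsing the PEM and comparing NotAfter; a small sketch for one of the files listed in the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Equivalent of "openssl x509 -checkend 86400".
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate expires within 24h")
        } else {
            fmt.Println("certificate valid for at least another 24h")
        }
    }
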
	I1101 01:00:22.136711   58730 kubeadm.go:404] StartCluster: {Name:embed-certs-754132 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:embed-certs-754132 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.83 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 01:00:22.136843   58730 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 01:00:22.136898   58730 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:00:22.172188   58730 cri.go:89] found id: ""
	I1101 01:00:22.172267   58730 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 01:00:22.181863   58730 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1101 01:00:22.181901   58730 kubeadm.go:636] restartCluster start
	I1101 01:00:22.181962   58730 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 01:00:22.190970   58730 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:22.192108   58730 kubeconfig.go:92] found "embed-certs-754132" server: "https://192.168.61.83:8443"
	I1101 01:00:22.194633   58730 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 01:00:22.203708   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:22.203792   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:22.214867   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:22.214889   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:22.214972   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:22.225940   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:22.726677   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:22.726769   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:22.737874   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:23.226416   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:23.226492   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:23.237902   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:22.907053   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:22.907532   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:22.907563   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:22.907487   59797 retry.go:31] will retry after 2.170221456s: waiting for machine to come up
	I1101 01:00:25.079354   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:25.079791   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:25.079831   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:25.079754   59797 retry.go:31] will retry after 2.279141994s: waiting for machine to come up
	I1101 01:00:27.361955   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:27.362423   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:27.362456   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:27.362368   59797 retry.go:31] will retry after 2.772425742s: waiting for machine to come up
	I1101 01:00:23.726108   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:23.726179   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:23.737404   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:24.226007   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:24.226178   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:24.237401   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:24.727058   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:24.727152   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:24.742704   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:25.226166   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:25.226272   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:25.237808   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:25.726161   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:25.726244   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:25.737763   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:26.226321   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:26.226485   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:26.239919   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:26.726488   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:26.726596   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:26.740719   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:27.226157   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:27.226268   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:27.240719   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:27.726272   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:27.726360   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:27.738068   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:28.226882   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:28.226954   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:28.239208   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:30.136893   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:30.137311   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:30.137333   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:30.137274   59797 retry.go:31] will retry after 4.191062934s: waiting for machine to come up
	I1101 01:00:28.726726   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:28.726845   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:28.737955   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:29.226410   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:29.226475   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:29.237886   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:29.726367   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:29.726461   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:29.737767   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:30.226294   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:30.226389   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:30.237767   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:30.726295   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:30.726363   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:30.737691   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:31.226274   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:31.226343   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:31.237801   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:31.726297   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:31.726366   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:31.738060   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:32.204696   58730 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1101 01:00:32.204729   58730 kubeadm.go:1128] stopping kube-system containers ...
	I1101 01:00:32.204741   58730 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 01:00:32.204792   58730 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:00:32.241943   58730 cri.go:89] found id: ""
	I1101 01:00:32.242012   58730 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 01:00:32.256657   58730 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:00:32.265087   58730 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:00:32.265159   58730 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:00:32.273631   58730 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 01:00:32.273654   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:32.379073   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
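
The long run of "Checking apiserver status" probes above is a poll loop: pgrep for a kube-apiserver process roughly every half second until one appears or an overall deadline expires, at which point the restart path falls back to reconfiguring the cluster as shown. A simplified sketch of that pattern (the pgrep pattern is the one in the log; the 10-second budget here is only illustrative):

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
                return
            }
            select {
            case <-ctx.Done():
                // Matches the "apiserver error: context deadline exceeded" outcome above.
                fmt.Println("gave up:", ctx.Err())
                return
            case <-ticker.C:
            }
        }
    }
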
	I1101 01:00:35.634014   59148 start.go:369] acquired machines lock for "default-k8s-diff-port-639310" in 4m10.491521982s
	I1101 01:00:35.634070   59148 start.go:96] Skipping create...Using existing machine configuration
	I1101 01:00:35.634078   59148 fix.go:54] fixHost starting: 
	I1101 01:00:35.634533   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:00:35.634577   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:00:35.654259   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46439
	I1101 01:00:35.654746   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:00:35.655216   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:00:35.655245   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:00:35.655578   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:00:35.655759   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:35.655905   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetState
	I1101 01:00:35.657604   59148 fix.go:102] recreateIfNeeded on default-k8s-diff-port-639310: state=Stopped err=<nil>
	I1101 01:00:35.657646   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	W1101 01:00:35.657804   59148 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 01:00:35.660028   59148 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-639310" ...
	I1101 01:00:34.332963   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.333486   58823 main.go:141] libmachine: (old-k8s-version-330042) Found IP for machine: 192.168.39.90
	I1101 01:00:34.333518   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has current primary IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.333529   58823 main.go:141] libmachine: (old-k8s-version-330042) Reserving static IP address...
	I1101 01:00:34.333853   58823 main.go:141] libmachine: (old-k8s-version-330042) Reserved static IP address: 192.168.39.90
	I1101 01:00:34.333874   58823 main.go:141] libmachine: (old-k8s-version-330042) Waiting for SSH to be available...
	I1101 01:00:34.333901   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "old-k8s-version-330042", mac: "52:54:00:a2:40:80", ip: "192.168.39.90"} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.333932   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | skip adding static IP to network mk-old-k8s-version-330042 - found existing host DHCP lease matching {name: "old-k8s-version-330042", mac: "52:54:00:a2:40:80", ip: "192.168.39.90"}
	I1101 01:00:34.333954   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Getting to WaitForSSH function...
	I1101 01:00:34.335871   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.336238   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.336275   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.336409   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Using SSH client type: external
	I1101 01:00:34.336446   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa (-rw-------)
	I1101 01:00:34.336480   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.90 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 01:00:34.336501   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | About to run SSH command:
	I1101 01:00:34.336523   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | exit 0
	I1101 01:00:34.431938   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | SSH cmd err, output: <nil>: 
	I1101 01:00:34.432324   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetConfigRaw
	I1101 01:00:34.433070   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetIP
	I1101 01:00:34.435967   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.436402   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.436434   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.436696   58823 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/config.json ...
	I1101 01:00:34.436886   58823 machine.go:88] provisioning docker machine ...
	I1101 01:00:34.436903   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:34.437136   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetMachineName
	I1101 01:00:34.437299   58823 buildroot.go:166] provisioning hostname "old-k8s-version-330042"
	I1101 01:00:34.437323   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetMachineName
	I1101 01:00:34.437508   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:34.439785   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.440175   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.440215   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.440316   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:34.440481   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:34.440662   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:34.440800   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:34.440965   58823 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:34.441440   58823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1101 01:00:34.441461   58823 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-330042 && echo "old-k8s-version-330042" | sudo tee /etc/hostname
	I1101 01:00:34.590132   58823 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-330042
	
	I1101 01:00:34.590168   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:34.593018   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.593457   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.593521   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.593623   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:34.593817   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:34.594004   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:34.594151   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:34.594317   58823 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:34.594622   58823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1101 01:00:34.594640   58823 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-330042' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-330042/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-330042' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 01:00:34.743448   58823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 01:00:34.743485   58823 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1101 01:00:34.743510   58823 buildroot.go:174] setting up certificates
	I1101 01:00:34.743530   58823 provision.go:83] configureAuth start
	I1101 01:00:34.743545   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetMachineName
	I1101 01:00:34.743848   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetIP
	I1101 01:00:34.746932   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.747302   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.747333   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.747478   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:34.749794   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.750154   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.750185   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.750339   58823 provision.go:138] copyHostCerts
	I1101 01:00:34.750412   58823 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem, removing ...
	I1101 01:00:34.750435   58823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 01:00:34.750504   58823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1101 01:00:34.750620   58823 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem, removing ...
	I1101 01:00:34.750628   58823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 01:00:34.750655   58823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1101 01:00:34.750726   58823 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem, removing ...
	I1101 01:00:34.750736   58823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 01:00:34.750761   58823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1101 01:00:34.750820   58823 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-330042 san=[192.168.39.90 192.168.39.90 localhost 127.0.0.1 minikube old-k8s-version-330042]
	I1101 01:00:34.819269   58823 provision.go:172] copyRemoteCerts
	I1101 01:00:34.819327   58823 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 01:00:34.819354   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:34.822409   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.822852   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.822887   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.823101   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:34.823335   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:34.823520   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:34.823688   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:00:34.928534   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 01:00:34.955140   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1101 01:00:34.982361   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 01:00:35.007980   58823 provision.go:86] duration metric: configureAuth took 264.432358ms
	I1101 01:00:35.008007   58823 buildroot.go:189] setting minikube options for container-runtime
	I1101 01:00:35.008317   58823 config.go:182] Loaded profile config "old-k8s-version-330042": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1101 01:00:35.008450   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:35.011424   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.011790   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:35.011820   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.012054   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:35.012305   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.012505   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.012692   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:35.012898   58823 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:35.013292   58823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1101 01:00:35.013310   58823 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 01:00:35.345179   58823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 01:00:35.345210   58823 machine.go:91] provisioned docker machine in 908.310008ms
	I1101 01:00:35.345224   58823 start.go:300] post-start starting for "old-k8s-version-330042" (driver="kvm2")
	I1101 01:00:35.345236   58823 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 01:00:35.345283   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:35.345634   58823 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 01:00:35.345666   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:35.348576   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.348945   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:35.348978   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.349171   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:35.349364   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.349527   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:35.349672   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:00:35.448239   58823 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 01:00:35.453459   58823 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 01:00:35.453495   58823 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1101 01:00:35.453589   58823 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1101 01:00:35.453705   58823 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> 145042.pem in /etc/ssl/certs
	I1101 01:00:35.453819   58823 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 01:00:35.464658   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:00:35.488669   58823 start.go:303] post-start completed in 143.429717ms
	I1101 01:00:35.488699   58823 fix.go:56] fixHost completed within 20.248082329s
	I1101 01:00:35.488723   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:35.491535   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.491917   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:35.491962   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.492108   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:35.492302   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.492472   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.492610   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:35.492777   58823 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:35.493085   58823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1101 01:00:35.493097   58823 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1101 01:00:35.633831   58823 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698800435.580601462
	
	I1101 01:00:35.633860   58823 fix.go:206] guest clock: 1698800435.580601462
	I1101 01:00:35.633872   58823 fix.go:219] Guest: 2023-11-01 01:00:35.580601462 +0000 UTC Remote: 2023-11-01 01:00:35.488703086 +0000 UTC m=+308.076532844 (delta=91.898376ms)
	I1101 01:00:35.633899   58823 fix.go:190] guest clock delta is within tolerance: 91.898376ms
	I1101 01:00:35.633906   58823 start.go:83] releasing machines lock for "old-k8s-version-330042", held for 20.393324923s
	I1101 01:00:35.633937   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:35.634276   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetIP
	I1101 01:00:35.637052   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.637411   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:35.637462   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.637668   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:35.638239   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:35.638479   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:35.638661   58823 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 01:00:35.638703   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:35.638792   58823 ssh_runner.go:195] Run: cat /version.json
	I1101 01:00:35.638813   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:35.641913   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:35.641919   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.642071   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:35.642094   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.642106   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.642151   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.642323   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:35.642517   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:35.642547   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.642608   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:35.642640   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:00:35.642736   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.642872   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:35.642994   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:00:35.772469   58823 ssh_runner.go:195] Run: systemctl --version
	I1101 01:00:35.778377   58823 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 01:00:35.930189   58823 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 01:00:35.937481   58823 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 01:00:35.937583   58823 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 01:00:35.959054   58823 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 01:00:35.959081   58823 start.go:472] detecting cgroup driver to use...
	I1101 01:00:35.959166   58823 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 01:00:35.978338   58823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 01:00:35.994627   58823 docker.go:204] disabling cri-docker service (if available) ...
	I1101 01:00:35.994690   58823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 01:00:36.010212   58823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 01:00:36.025616   58823 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 01:00:36.132484   58823 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 01:00:36.266531   58823 docker.go:220] disabling docker service ...
	I1101 01:00:36.266604   58823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 01:00:36.280303   58823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 01:00:36.291905   58823 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 01:00:36.413114   58823 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 01:00:36.527297   58823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 01:00:36.540547   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 01:00:36.561997   58823 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1101 01:00:36.562070   58823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:36.574735   58823 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 01:00:36.574809   58823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:36.584015   58823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:36.592896   58823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:36.602199   58823 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 01:00:36.611742   58823 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 01:00:36.620073   58823 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 01:00:36.620140   58823 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 01:00:36.633237   58823 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 01:00:36.641679   58823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 01:00:36.786323   58823 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 01:00:37.011240   58823 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 01:00:37.011332   58823 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 01:00:37.016349   58823 start.go:540] Will wait 60s for crictl version
	I1101 01:00:37.016417   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:37.020952   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 01:00:37.068566   58823 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1101 01:00:37.068649   58823 ssh_runner.go:195] Run: crio --version
	I1101 01:00:37.119257   58823 ssh_runner.go:195] Run: crio --version
	I1101 01:00:37.170471   58823 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1101 01:00:37.172128   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetIP
	I1101 01:00:37.175116   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:37.175552   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:37.175583   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:37.175834   58823 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1101 01:00:37.179970   58823 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:00:37.193466   58823 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1101 01:00:37.193550   58823 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:00:37.239780   58823 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1101 01:00:37.239851   58823 ssh_runner.go:195] Run: which lz4
	I1101 01:00:37.243871   58823 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1101 01:00:37.248203   58823 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 01:00:37.248243   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1101 01:00:33.273385   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:33.468847   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:33.558663   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:33.632226   58730 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:00:33.632305   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:33.645291   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:34.159920   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:34.660339   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:35.159837   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:35.659362   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:36.159870   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:36.189698   58730 api_server.go:72] duration metric: took 2.557471176s to wait for apiserver process to appear ...
	I1101 01:00:36.189726   58730 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:00:36.189746   58730 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8443/healthz ...
	I1101 01:00:35.662001   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Start
	I1101 01:00:35.662248   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Ensuring networks are active...
	I1101 01:00:35.663075   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Ensuring network default is active
	I1101 01:00:35.663589   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Ensuring network mk-default-k8s-diff-port-639310 is active
	I1101 01:00:35.664066   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Getting domain xml...
	I1101 01:00:35.664780   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Creating domain...
	I1101 01:00:37.046385   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting to get IP...
	I1101 01:00:37.047592   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.048056   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.048160   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:37.048064   59967 retry.go:31] will retry after 244.19131ms: waiting for machine to come up
	I1101 01:00:37.293636   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.294421   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.294535   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:37.294483   59967 retry.go:31] will retry after 281.302105ms: waiting for machine to come up
	I1101 01:00:37.577271   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.577934   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.577962   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:37.577874   59967 retry.go:31] will retry after 376.713113ms: waiting for machine to come up
	I1101 01:00:37.956666   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.957154   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.957182   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:37.957125   59967 retry.go:31] will retry after 366.92844ms: waiting for machine to come up
	I1101 01:00:38.325741   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:38.326257   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:38.326291   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:38.326226   59967 retry.go:31] will retry after 478.435824ms: waiting for machine to come up
	I1101 01:00:38.806215   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:38.806928   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:38.806965   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:38.806904   59967 retry.go:31] will retry after 910.120665ms: waiting for machine to come up
	I1101 01:00:39.718641   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:39.719281   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:39.719307   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:39.719210   59967 retry.go:31] will retry after 1.017844602s: waiting for machine to come up
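The "will retry after ..." lines above come from minikube's retry helper while it waits for the default-k8s-diff-port-639310 VM to obtain a DHCP lease. A minimal Go sketch of that wait-with-growing-backoff pattern, under assumed delay and jitter values (an illustration, not the actual retry.go implementation):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryForIP calls lookupIP (a placeholder for querying the libvirt DHCP
	// leases) until it returns an address or the deadline expires, sleeping a
	// growing, jittered interval between attempts.
	func retryForIP(lookupIP func() (string, error), deadline time.Duration) (string, error) {
		start := time.Now()
		delay := 250 * time.Millisecond
		for time.Since(start) < deadline {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			// Jitter and grow the delay, mirroring the increasing
			// "will retry after ..." intervals recorded in the log.
			wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			delay = delay * 3 / 2
		}
		return "", errors.New("timed out waiting for machine to come up")
	}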
	I1101 01:00:40.636542   58730 api_server.go:279] https://192.168.61.83:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 01:00:40.636586   58730 api_server.go:103] status: https://192.168.61.83:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 01:00:40.636602   58730 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8443/healthz ...
	I1101 01:00:40.687211   58730 api_server.go:279] https://192.168.61.83:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 01:00:40.687258   58730 api_server.go:103] status: https://192.168.61.83:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 01:00:41.187988   58730 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8443/healthz ...
	I1101 01:00:41.197585   58730 api_server.go:279] https://192.168.61.83:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:00:41.197626   58730 api_server.go:103] status: https://192.168.61.83:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:00:41.688019   58730 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8443/healthz ...
	I1101 01:00:41.698406   58730 api_server.go:279] https://192.168.61.83:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:00:41.698439   58730 api_server.go:103] status: https://192.168.61.83:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:00:42.188141   58730 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8443/healthz ...
	I1101 01:00:42.195663   58730 api_server.go:279] https://192.168.61.83:8443/healthz returned 200:
	ok
	I1101 01:00:42.204715   58730 api_server.go:141] control plane version: v1.28.3
	I1101 01:00:42.204746   58730 api_server.go:131] duration metric: took 6.015012484s to wait for apiserver health ...
	I1101 01:00:42.204756   58730 cni.go:84] Creating CNI manager for ""
	I1101 01:00:42.204764   58730 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:00:42.206831   58730 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
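The healthz probes above show the apiserver answering 403 (anonymous access still forbidden) and then 500 (rbac/bootstrap-roles and other post-start hooks not yet finished) before finally returning 200. A minimal sketch of polling that same endpoint, with an assumed 500ms interval and a client that skips certificate verification (not minikube's api_server.go):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz answered "ok"
				}
				// 403 and 500 responses like those in the log land here.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

Called as, for example, waitForHealthz("https://192.168.61.83:8443/healthz", time.Minute) against the endpoint shown in the log.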
	I1101 01:00:38.979032   58823 crio.go:444] Took 1.735199 seconds to copy over tarball
	I1101 01:00:38.979127   58823 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 01:00:42.235526   58823 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.256363592s)
	I1101 01:00:42.235558   58823 crio.go:451] Took 3.256498 seconds to extract the tarball
	I1101 01:00:42.235592   58823 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 01:00:42.278508   58823 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:00:42.332199   58823 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1101 01:00:42.332225   58823 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1101 01:00:42.332323   58823 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:00:42.332383   58823 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1101 01:00:42.332425   58823 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1101 01:00:42.332445   58823 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1101 01:00:42.332394   58823 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1101 01:00:42.332554   58823 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1101 01:00:42.332552   58823 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1101 01:00:42.332611   58823 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1101 01:00:42.333952   58823 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1101 01:00:42.333965   58823 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1101 01:00:42.333971   58823 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1101 01:00:42.333973   58823 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:00:42.333951   58823 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1101 01:00:42.333959   58823 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1101 01:00:42.334015   58823 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1101 01:00:42.334422   58823 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1101 01:00:42.208425   58730 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:00:42.243672   58730 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1101 01:00:42.270472   58730 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:00:40.739283   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:40.739839   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:40.739871   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:40.739751   59967 retry.go:31] will retry after 924.830892ms: waiting for machine to come up
	I1101 01:00:41.666231   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:41.666922   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:41.666949   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:41.666878   59967 retry.go:31] will retry after 1.792434708s: waiting for machine to come up
	I1101 01:00:43.461158   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:43.461723   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:43.461758   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:43.461651   59967 retry.go:31] will retry after 1.458280506s: waiting for machine to come up
	I1101 01:00:44.921321   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:44.922072   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:44.922105   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:44.922018   59967 retry.go:31] will retry after 2.732488928s: waiting for machine to come up
	I1101 01:00:42.548949   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1101 01:00:42.549011   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1101 01:00:42.552787   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1101 01:00:42.554125   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1101 01:00:42.559301   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1101 01:00:42.560733   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1101 01:00:42.564609   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1101 01:00:42.857456   58823 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1101 01:00:42.857497   58823 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1101 01:00:42.857537   58823 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1101 01:00:42.857565   58823 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1101 01:00:42.857583   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.857502   58823 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1101 01:00:42.857597   58823 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1101 01:00:42.857644   58823 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1101 01:00:42.857703   58823 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1101 01:00:42.857733   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.857663   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.857666   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.880301   58823 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1101 01:00:42.880350   58823 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1101 01:00:42.880362   58823 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1101 01:00:42.880404   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.880421   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1101 01:00:42.880432   58823 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1101 01:00:42.880473   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.880475   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1101 01:00:42.880542   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1101 01:00:42.880377   58823 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1101 01:00:42.880587   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1101 01:00:42.880610   58823 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1101 01:00:42.880663   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.958449   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1101 01:00:42.975151   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1101 01:00:42.975188   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1101 01:00:42.979136   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1101 01:00:42.979198   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1101 01:00:42.979246   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1101 01:00:42.979306   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1101 01:00:43.059447   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1101 01:00:43.059470   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1101 01:00:43.059515   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1101 01:00:43.059572   58823 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1101 01:00:43.065313   58823 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1101 01:00:43.065337   58823 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1101 01:00:43.065397   58823 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1101 01:00:43.212775   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:00:44.821509   58823 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.756075689s)
	I1101 01:00:44.821542   58823 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1101 01:00:44.821600   58823 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.608800531s)
	I1101 01:00:44.821639   58823 cache_images.go:92] LoadImages completed in 2.489401317s
	W1101 01:00:44.821749   58823 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0: no such file or directory
	I1101 01:00:44.821888   58823 ssh_runner.go:195] Run: crio config
	I1101 01:00:44.911017   58823 cni.go:84] Creating CNI manager for ""
	I1101 01:00:44.911094   58823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:00:44.911132   58823 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 01:00:44.911173   58823 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.90 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-330042 NodeName:old-k8s-version-330042 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1101 01:00:44.911365   58823 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-330042"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-330042
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.90:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 01:00:44.911510   58823 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-330042 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-330042 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1101 01:00:44.911601   58823 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1101 01:00:44.925733   58823 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 01:00:44.925810   58823 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 01:00:44.939166   58823 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1101 01:00:44.962847   58823 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 01:00:44.986855   58823 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1101 01:00:45.011998   58823 ssh_runner.go:195] Run: grep 192.168.39.90	control-plane.minikube.internal$ /etc/hosts
	I1101 01:00:45.017160   58823 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.90	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:00:45.035826   58823 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042 for IP: 192.168.39.90
	I1101 01:00:45.035866   58823 certs.go:190] acquiring lock for shared ca certs: {Name:mk6185b3938c4b51e80f57b7c81926c81b632b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:00:45.036097   58823 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key
	I1101 01:00:45.036161   58823 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key
	I1101 01:00:45.036276   58823 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/client.key
	I1101 01:00:45.036363   58823 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/apiserver.key.05a13cdc
	I1101 01:00:45.036423   58823 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/proxy-client.key
	I1101 01:00:45.036600   58823 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem (1338 bytes)
	W1101 01:00:45.036642   58823 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504_empty.pem, impossibly tiny 0 bytes
	I1101 01:00:45.036657   58823 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 01:00:45.036697   58823 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem (1078 bytes)
	I1101 01:00:45.036734   58823 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem (1123 bytes)
	I1101 01:00:45.036769   58823 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem (1675 bytes)
	I1101 01:00:45.036837   58823 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:00:45.037808   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 01:00:45.071828   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 01:00:45.105069   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 01:00:45.136650   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 01:00:45.169633   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 01:00:45.202102   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 01:00:45.234227   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 01:00:45.265901   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 01:00:45.297720   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem --> /usr/share/ca-certificates/14504.pem (1338 bytes)
	I1101 01:00:45.330915   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /usr/share/ca-certificates/145042.pem (1708 bytes)
	I1101 01:00:45.361364   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 01:00:45.391023   58823 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 01:00:45.412643   58823 ssh_runner.go:195] Run: openssl version
	I1101 01:00:45.419938   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145042.pem && ln -fs /usr/share/ca-certificates/145042.pem /etc/ssl/certs/145042.pem"
	I1101 01:00:45.433972   58823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145042.pem
	I1101 01:00:45.439966   58823 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 01:00:45.440070   58823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145042.pem
	I1101 01:00:45.447248   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145042.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 01:00:45.461261   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 01:00:45.475166   58823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:45.481174   58823 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:45.481281   58823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:45.488190   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 01:00:45.502428   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14504.pem && ln -fs /usr/share/ca-certificates/14504.pem /etc/ssl/certs/14504.pem"
	I1101 01:00:45.515353   58823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14504.pem
	I1101 01:00:45.520135   58823 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 01:00:45.520196   58823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14504.pem
	I1101 01:00:45.525605   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14504.pem /etc/ssl/certs/51391683.0"
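
Note: the ln -fs commands above install each CA under its OpenSSL subject-hash name (for example b5213941.0) so the system trust store can look it up. A minimal sketch of that step, assuming openssl is on PATH; paths are illustrative:

// Minimal sketch: compute the subject hash of a CA certificate and create
// the /etc/ssl/certs/<hash>.0 symlink the way the log's ln -fs commands do.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func linkCA(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("%s/%s.0", certsDir, hash)
	// Replace any stale link, then point <hash>.0 at the CA file.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
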
	I1101 01:00:45.535886   58823 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 01:00:45.540671   58823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 01:00:45.546973   58823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 01:00:45.554439   58823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 01:00:45.562216   58823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 01:00:45.570082   58823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 01:00:45.578073   58823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
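
Note: each control-plane certificate above is checked for expiry within the next 24 hours ("openssl x509 -checkend 86400"). A minimal sketch of the same check done with crypto/x509 instead of openssl:

// Minimal sketch: report whether a certificate expires within the given
// window, mirroring the -checkend 86400 checks in the log.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(certPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
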
	I1101 01:00:45.586056   58823 kubeadm.go:404] StartCluster: {Name:old-k8s-version-330042 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-330042 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 01:00:45.586202   58823 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 01:00:45.586270   58823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:00:45.632205   58823 cri.go:89] found id: ""
	I1101 01:00:45.632279   58823 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 01:00:45.646397   58823 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1101 01:00:45.646432   58823 kubeadm.go:636] restartCluster start
	I1101 01:00:45.646492   58823 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 01:00:45.660754   58823 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:45.662302   58823 kubeconfig.go:92] found "old-k8s-version-330042" server: "https://192.168.39.90:8443"
	I1101 01:00:45.665617   58823 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 01:00:45.679127   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:45.679203   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:45.697578   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:45.697601   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:45.697662   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:45.715086   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:46.215841   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:46.215939   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:46.233039   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:46.715162   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:46.715283   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:46.727101   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:47.215409   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:47.215512   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:47.228104   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
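
Note: the repeated "Checking apiserver status" lines are a polling loop: roughly every half second minikube asks pgrep for a kube-apiserver PID and keeps retrying until one appears or a deadline passes. A minimal sketch of that loop; the run helper standing in for minikube's SSH runner is a local stand-in, not the real API:

// Minimal sketch of the polling pattern in the log: look for a
// kube-apiserver PID via pgrep about twice a second until a deadline.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// run executes a command locally; minikube runs the same command on the
// guest over SSH.
func run(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).Output()
	return strings.TrimSpace(string(out)), err
}

func waitForAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pid, err := run("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
		if err == nil && pid != "" {
			return pid, nil
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s spacing in the log
	}
	return "", errors.New("timed out waiting for kube-apiserver")
}

func main() {
	pid, err := waitForAPIServerPID(2 * time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver pid:", pid)
}
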
	I1101 01:00:43.297105   58730 system_pods.go:59] 9 kube-system pods found
	I1101 01:00:43.452043   58730 system_pods.go:61] "coredns-5dd5756b68-9hvh7" [d7d126c2-c270-452c-b939-15303a174742] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 01:00:43.452062   58730 system_pods.go:61] "coredns-5dd5756b68-gptmc" [fbbb9f17-32d6-456d-8171-eadcf64b11a8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 01:00:43.452074   58730 system_pods.go:61] "etcd-embed-certs-754132" [3c7474c1-788e-461d-bd20-e62c3c12cf27] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 01:00:43.452086   58730 system_pods.go:61] "kube-apiserver-embed-certs-754132" [d218a8d6-536c-400a-b81e-325b89ab475b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 01:00:43.452116   58730 system_pods.go:61] "kube-controller-manager-embed-certs-754132" [930b7861-b807-4f24-ba3c-9365a1e8dd8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 01:00:43.452128   58730 system_pods.go:61] "kube-proxy-d5d5x" [c7a6d923-0b37-452b-9979-0a64c05ee737] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 01:00:43.452142   58730 system_pods.go:61] "kube-scheduler-embed-certs-754132" [fd9c0833-f9d4-41cf-b5dd-b676ea5da6ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 01:00:43.452156   58730 system_pods.go:61] "metrics-server-57f55c9bc5-znchz" [60da0fbf-a2c4-4910-b06b-251b33b8ad0b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:00:43.452169   58730 system_pods.go:61] "storage-provisioner" [fbece4fb-6f83-4f17-acb8-94f493dd72e9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 01:00:43.452185   58730 system_pods.go:74] duration metric: took 1.181683794s to wait for pod list to return data ...
	I1101 01:00:43.452198   58730 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:00:44.181694   58730 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:00:44.181739   58730 node_conditions.go:123] node cpu capacity is 2
	I1101 01:00:44.181756   58730 node_conditions.go:105] duration metric: took 729.549671ms to run NodePressure ...
	I1101 01:00:44.181784   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:45.274729   58730 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.092921592s)
	I1101 01:00:45.274761   58730 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1101 01:00:45.285444   58730 kubeadm.go:787] kubelet initialised
	I1101 01:00:45.285478   58730 kubeadm.go:788] duration metric: took 10.704919ms waiting for restarted kubelet to initialise ...
	I1101 01:00:45.285489   58730 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:00:45.303122   58730 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-9hvh7" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:47.333376   58730 pod_ready.go:92] pod "coredns-5dd5756b68-9hvh7" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:47.333404   58730 pod_ready.go:81] duration metric: took 2.030252648s waiting for pod "coredns-5dd5756b68-9hvh7" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:47.333415   58730 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-gptmc" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:47.339165   58730 pod_ready.go:92] pod "coredns-5dd5756b68-gptmc" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:47.339189   58730 pod_ready.go:81] duration metric: took 5.76803ms waiting for pod "coredns-5dd5756b68-gptmc" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:47.339201   58730 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:47.656259   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:47.656733   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:47.656767   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:47.656688   59967 retry.go:31] will retry after 3.546373187s: waiting for machine to come up
	I1101 01:00:47.716219   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:47.716302   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:47.729221   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:48.215453   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:48.215562   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:48.230259   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:48.715905   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:48.716035   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:48.729001   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:49.216123   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:49.216217   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:49.232128   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:49.715640   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:49.715708   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:49.729098   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:50.215271   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:50.215379   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:50.228075   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:50.715151   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:50.715256   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:50.726839   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:51.215204   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:51.215293   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:51.227412   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:51.715753   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:51.715870   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:51.728794   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:52.215318   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:52.215437   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:52.227527   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:48.860188   58730 pod_ready.go:92] pod "etcd-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:48.860215   58730 pod_ready.go:81] duration metric: took 1.521005544s waiting for pod "etcd-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:48.860228   58730 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:50.286848   58730 pod_ready.go:92] pod "kube-apiserver-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:50.286882   58730 pod_ready.go:81] duration metric: took 1.426640629s waiting for pod "kube-apiserver-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:50.286894   58730 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:51.886531   58730 pod_ready.go:92] pod "kube-controller-manager-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:51.886555   58730 pod_ready.go:81] duration metric: took 1.599653882s waiting for pod "kube-controller-manager-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:51.886565   58730 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d5d5x" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:52.079723   58730 pod_ready.go:92] pod "kube-proxy-d5d5x" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:52.079752   58730 pod_ready.go:81] duration metric: took 193.181169ms waiting for pod "kube-proxy-d5d5x" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:52.079766   58730 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
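
Note: each pod_ready wait above polls the pod until its Ready condition reports True, with a 4m0s budget. A minimal client-go sketch of the same check, assuming a kubeconfig at the default location and reusing one of the pod names from the log:

// Minimal sketch: poll a kube-system pod until its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(4 * time.Minute) // same 4m0s budget as the log
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-754132", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
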
	I1101 01:00:51.204423   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:51.204909   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:51.204945   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:51.204854   59967 retry.go:31] will retry after 3.382936792s: waiting for machine to come up
	I1101 01:00:54.588976   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.589398   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Found IP for machine: 192.168.72.97
	I1101 01:00:54.589427   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Reserving static IP address...
	I1101 01:00:54.589447   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has current primary IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.589764   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Reserved static IP address: 192.168.72.97
	I1101 01:00:54.589783   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for SSH to be available...
	I1101 01:00:54.589811   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-639310", mac: "52:54:00:83:e0:44", ip: "192.168.72.97"} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.589841   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | skip adding static IP to network mk-default-k8s-diff-port-639310 - found existing host DHCP lease matching {name: "default-k8s-diff-port-639310", mac: "52:54:00:83:e0:44", ip: "192.168.72.97"}
	I1101 01:00:54.589858   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | Getting to WaitForSSH function...
	I1101 01:00:54.591920   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.592295   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.592327   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.592518   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | Using SSH client type: external
	I1101 01:00:54.592546   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa (-rw-------)
	I1101 01:00:54.592568   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 01:00:54.592581   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | About to run SSH command:
	I1101 01:00:54.592605   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | exit 0
	I1101 01:00:54.687664   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | SSH cmd err, output: <nil>: 
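
Note: "Waiting for SSH to be available" retries until the guest accepts an SSH session. A minimal sketch that only probes TCP port 22; libmachine additionally runs "exit 0" through a real SSH session, as shown above:

// Minimal sketch: wait until the machine's SSH port accepts connections.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("timed out waiting for SSH on %s", addr)
}

func main() {
	if err := waitForSSH("192.168.72.97:22", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("SSH is reachable")
}
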
	I1101 01:00:54.688005   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetConfigRaw
	I1101 01:00:54.688653   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetIP
	I1101 01:00:54.691258   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.691761   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.691803   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.692096   59148 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/config.json ...
	I1101 01:00:54.692278   59148 machine.go:88] provisioning docker machine ...
	I1101 01:00:54.692297   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:54.692554   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetMachineName
	I1101 01:00:54.692765   59148 buildroot.go:166] provisioning hostname "default-k8s-diff-port-639310"
	I1101 01:00:54.692787   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetMachineName
	I1101 01:00:54.692962   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:54.695491   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.695887   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.695917   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.696074   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:54.696280   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:54.696477   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:54.696624   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:54.696817   59148 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:54.697275   59148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.97 22 <nil> <nil>}
	I1101 01:00:54.697298   59148 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-639310 && echo "default-k8s-diff-port-639310" | sudo tee /etc/hostname
	I1101 01:00:54.836084   59148 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-639310
	
	I1101 01:00:54.836118   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:54.839109   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.839437   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.839463   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.839732   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:54.839986   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:54.840131   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:54.840290   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:54.840501   59148 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:54.840865   59148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.97 22 <nil> <nil>}
	I1101 01:00:54.840885   59148 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-639310' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-639310/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-639310' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 01:00:54.979804   59148 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 01:00:54.979841   59148 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1101 01:00:54.979870   59148 buildroot.go:174] setting up certificates
	I1101 01:00:54.979881   59148 provision.go:83] configureAuth start
	I1101 01:00:54.979898   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetMachineName
	I1101 01:00:54.980246   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetIP
	I1101 01:00:54.983397   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.983760   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.983794   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.984029   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:54.986746   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.987112   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.987160   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.987328   59148 provision.go:138] copyHostCerts
	I1101 01:00:54.987418   59148 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem, removing ...
	I1101 01:00:54.987437   59148 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 01:00:54.987507   59148 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1101 01:00:54.987619   59148 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem, removing ...
	I1101 01:00:54.987628   59148 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 01:00:54.987651   59148 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1101 01:00:54.987707   59148 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem, removing ...
	I1101 01:00:54.987714   59148 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 01:00:54.987731   59148 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1101 01:00:54.987773   59148 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-639310 san=[192.168.72.97 192.168.72.97 localhost 127.0.0.1 minikube default-k8s-diff-port-639310]
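
Note: copyHostCerts mirrors ca.pem, cert.pem and key.pem from .minikube/certs into the .minikube root before the machine server certificate is generated with the SANs listed above. A minimal sketch of the copy step; the paths are shortened placeholders, not the real profile paths:

// Minimal sketch: replace any existing copy of a host cert, then copy the
// source file into place, mirroring the found/removing/cp lines in the log.
package main

import (
	"fmt"
	"io"
	"os"
)

func copyHostCert(src, dst string) error {
	// Mirror the log's behaviour: remove a stale copy before writing.
	if _, err := os.Stat(dst); err == nil {
		if err := os.Remove(dst); err != nil {
			return err
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o600)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	if err := copyHostCert("/home/jenkins/.minikube/certs/ca.pem", "/home/jenkins/.minikube/ca.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
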
	I1101 01:00:56.081549   58676 start.go:369] acquired machines lock for "no-preload-008483" in 57.600332472s
	I1101 01:00:56.081600   58676 start.go:96] Skipping create...Using existing machine configuration
	I1101 01:00:56.081611   58676 fix.go:54] fixHost starting: 
	I1101 01:00:56.082003   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:00:56.082041   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:00:56.098896   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33091
	I1101 01:00:56.099300   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:00:56.099786   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:00:56.099817   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:00:56.100159   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:00:56.100364   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:00:56.100511   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetState
	I1101 01:00:56.104041   58676 fix.go:102] recreateIfNeeded on no-preload-008483: state=Stopped err=<nil>
	I1101 01:00:56.104071   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	W1101 01:00:56.104250   58676 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 01:00:56.106287   58676 out.go:177] * Restarting existing kvm2 VM for "no-preload-008483" ...
	I1101 01:00:52.715585   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:52.715665   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:52.726877   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:53.216119   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:53.216202   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:53.228700   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:53.715253   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:53.715342   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:53.729029   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:54.215451   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:54.215554   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:54.228217   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:54.715451   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:54.715513   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:54.727356   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:55.216034   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:55.216130   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:55.227905   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:55.680067   58823 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1101 01:00:55.680120   58823 kubeadm.go:1128] stopping kube-system containers ...
	I1101 01:00:55.680135   58823 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 01:00:55.680204   58823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:00:55.726658   58823 cri.go:89] found id: ""
	I1101 01:00:55.726744   58823 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 01:00:55.748477   58823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:00:55.758933   58823 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:00:55.759013   58823 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:00:55.769130   58823 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 01:00:55.769156   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:55.911136   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:57.164062   58823 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.252874473s)
	I1101 01:00:57.164095   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:57.403267   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
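
Note: because the stale-config check failed (no admin.conf/kubelet.conf/controller-manager.conf/scheduler.conf on disk), the cluster is reconfigured by re-running individual kubeadm init phases in order: certs, kubeconfig, kubelet-start, control-plane. A minimal sketch of that sequence; on the real guest each command runs through the SSH runner with the PATH override shown above:

// Minimal sketch: run the kubeadm init phases from the log, in order,
// against the copied config file.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.16.0/kubeadm"
	config := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
	}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, phase...)
		args = append(args, "--config", config)
		out, err := exec.Command(kubeadm, args...).CombinedOutput()
		if err != nil {
			fmt.Printf("phase %v failed: %v\n%s\n", phase, err, out)
			return
		}
	}
	fmt.Println("all init phases completed")
}
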
	I1101 01:00:55.270327   59148 provision.go:172] copyRemoteCerts
	I1101 01:00:55.270394   59148 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 01:00:55.270418   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:55.272988   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.273410   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:55.273444   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.273609   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:55.273818   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:55.273966   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:55.274113   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:00:55.367354   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 01:00:55.391069   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1101 01:00:55.413001   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 01:00:55.436904   59148 provision.go:86] duration metric: configureAuth took 457.006108ms
	I1101 01:00:55.436930   59148 buildroot.go:189] setting minikube options for container-runtime
	I1101 01:00:55.437115   59148 config.go:182] Loaded profile config "default-k8s-diff-port-639310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:00:55.437187   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:55.440286   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.440627   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:55.440662   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.440789   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:55.440989   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:55.441187   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:55.441330   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:55.441491   59148 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:55.441905   59148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.97 22 <nil> <nil>}
	I1101 01:00:55.441928   59148 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 01:00:55.788340   59148 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 01:00:55.788372   59148 machine.go:91] provisioned docker machine in 1.096081387s
	I1101 01:00:55.788386   59148 start.go:300] post-start starting for "default-k8s-diff-port-639310" (driver="kvm2")
	I1101 01:00:55.788401   59148 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 01:00:55.788443   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:55.788777   59148 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 01:00:55.788846   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:55.792110   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.792594   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:55.792626   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.792829   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:55.793080   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:55.793273   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:55.793421   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:00:55.893108   59148 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 01:00:55.898425   59148 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 01:00:55.898452   59148 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1101 01:00:55.898530   59148 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1101 01:00:55.898619   59148 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> 145042.pem in /etc/ssl/certs
	I1101 01:00:55.898751   59148 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 01:00:55.909396   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:00:55.943412   59148 start.go:303] post-start completed in 154.998365ms
	I1101 01:00:55.943440   59148 fix.go:56] fixHost completed within 20.309363198s
	I1101 01:00:55.943464   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:55.946417   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.946777   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:55.946810   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.947048   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:55.947268   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:55.947484   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:55.947662   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:55.947849   59148 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:55.948212   59148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.97 22 <nil> <nil>}
	I1101 01:00:55.948225   59148 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1101 01:00:56.081387   59148 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698800456.033536949
	
	I1101 01:00:56.081411   59148 fix.go:206] guest clock: 1698800456.033536949
	I1101 01:00:56.081422   59148 fix.go:219] Guest: 2023-11-01 01:00:56.033536949 +0000 UTC Remote: 2023-11-01 01:00:55.943445038 +0000 UTC m=+270.963710441 (delta=90.091911ms)
	I1101 01:00:56.081446   59148 fix.go:190] guest clock delta is within tolerance: 90.091911ms
	I1101 01:00:56.081451   59148 start.go:83] releasing machines lock for "default-k8s-diff-port-639310", held for 20.447404197s
	I1101 01:00:56.081484   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:56.081826   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetIP
	I1101 01:00:56.084827   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:56.085289   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:56.085326   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:56.085543   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:56.086049   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:56.086272   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:56.086374   59148 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 01:00:56.086425   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:56.086677   59148 ssh_runner.go:195] Run: cat /version.json
	I1101 01:00:56.086709   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:56.089377   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:56.089696   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:56.089784   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:56.089841   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:56.090077   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:56.090088   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:56.090108   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:56.090256   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:56.090329   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:56.090405   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:56.090479   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:56.090557   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:56.090613   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:00:56.090681   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:00:56.220669   59148 ssh_runner.go:195] Run: systemctl --version
	I1101 01:00:56.226971   59148 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 01:00:56.375845   59148 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 01:00:56.383893   59148 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 01:00:56.383986   59148 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 01:00:56.404009   59148 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 01:00:56.404035   59148 start.go:472] detecting cgroup driver to use...
	I1101 01:00:56.404107   59148 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 01:00:56.420015   59148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 01:00:56.435577   59148 docker.go:204] disabling cri-docker service (if available) ...
	I1101 01:00:56.435652   59148 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 01:00:56.448542   59148 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 01:00:56.465197   59148 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 01:00:56.607142   59148 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 01:00:56.739287   59148 docker.go:220] disabling docker service ...
	I1101 01:00:56.739366   59148 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 01:00:56.753861   59148 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 01:00:56.768891   59148 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 01:00:56.893929   59148 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 01:00:57.022891   59148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 01:00:57.039063   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 01:00:57.058893   59148 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 01:00:57.058964   59148 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:57.070769   59148 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 01:00:57.070845   59148 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:57.082528   59148 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:57.094350   59148 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:57.105953   59148 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 01:00:57.117745   59148 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 01:00:57.128493   59148 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 01:00:57.128553   59148 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 01:00:57.145858   59148 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 01:00:57.157318   59148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 01:00:57.288371   59148 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 01:00:57.489356   59148 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 01:00:57.489458   59148 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 01:00:57.495837   59148 start.go:540] Will wait 60s for crictl version
	I1101 01:00:57.495907   59148 ssh_runner.go:195] Run: which crictl
	I1101 01:00:57.500572   59148 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 01:00:57.546076   59148 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1101 01:00:57.546245   59148 ssh_runner.go:195] Run: crio --version
	I1101 01:00:57.601745   59148 ssh_runner.go:195] Run: crio --version
	I1101 01:00:57.664097   59148 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1101 01:00:54.387561   58730 pod_ready.go:102] pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace has status "Ready":"False"
	I1101 01:00:56.388062   58730 pod_ready.go:92] pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:56.388085   58730 pod_ready.go:81] duration metric: took 4.308312567s waiting for pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:56.388094   58730 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:57.666096   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetIP
	I1101 01:00:57.670028   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:57.670437   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:57.670472   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:57.670760   59148 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1101 01:00:57.675850   59148 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:00:57.689379   59148 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 01:00:57.689439   59148 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:00:57.736333   59148 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1101 01:00:57.736404   59148 ssh_runner.go:195] Run: which lz4
	I1101 01:00:57.740532   59148 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 01:00:57.745488   59148 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 01:00:57.745535   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1101 01:00:59.649981   59148 crio.go:444] Took 1.909486 seconds to copy over tarball
	I1101 01:00:59.650070   59148 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 01:00:56.107642   58676 main.go:141] libmachine: (no-preload-008483) Calling .Start
	I1101 01:00:56.107815   58676 main.go:141] libmachine: (no-preload-008483) Ensuring networks are active...
	I1101 01:00:56.108696   58676 main.go:141] libmachine: (no-preload-008483) Ensuring network default is active
	I1101 01:00:56.109190   58676 main.go:141] libmachine: (no-preload-008483) Ensuring network mk-no-preload-008483 is active
	I1101 01:00:56.109623   58676 main.go:141] libmachine: (no-preload-008483) Getting domain xml...
	I1101 01:00:56.110400   58676 main.go:141] libmachine: (no-preload-008483) Creating domain...
	I1101 01:00:57.626479   58676 main.go:141] libmachine: (no-preload-008483) Waiting to get IP...
	I1101 01:00:57.627653   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:00:57.628245   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:00:57.628315   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:00:57.628210   60142 retry.go:31] will retry after 306.868541ms: waiting for machine to come up
	I1101 01:00:57.936854   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:00:57.937358   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:00:57.937392   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:00:57.937309   60142 retry.go:31] will retry after 366.94808ms: waiting for machine to come up
	I1101 01:00:58.306219   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:00:58.306880   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:00:58.306909   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:00:58.306815   60142 retry.go:31] will retry after 470.784378ms: waiting for machine to come up
	I1101 01:00:58.781164   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:00:58.781784   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:00:58.781810   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:00:58.781686   60142 retry.go:31] will retry after 475.883045ms: waiting for machine to come up
	I1101 01:00:59.259400   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:00:59.259922   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:00:59.259964   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:00:59.259816   60142 retry.go:31] will retry after 533.372113ms: waiting for machine to come up
	I1101 01:00:59.794619   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:00:59.795307   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:00:59.795335   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:00:59.795222   60142 retry.go:31] will retry after 643.335947ms: waiting for machine to come up
	I1101 01:01:00.440339   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:00.440876   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:00.440901   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:00.440795   60142 retry.go:31] will retry after 899.488876ms: waiting for machine to come up
	I1101 01:00:57.545316   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:57.641733   58823 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:00:57.641812   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:57.655826   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:58.173767   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:58.674113   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:59.174394   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:59.674240   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:59.705758   58823 api_server.go:72] duration metric: took 2.064024888s to wait for apiserver process to appear ...
	I1101 01:00:59.705791   58823 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:00:59.705814   58823 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I1101 01:00:58.517913   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:00.993028   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:03.059373   59148 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.409271602s)
	I1101 01:01:03.059403   59148 crio.go:451] Took 3.409395 seconds to extract the tarball
	I1101 01:01:03.059413   59148 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 01:01:03.101818   59148 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:01:03.153263   59148 crio.go:496] all images are preloaded for cri-o runtime.
	I1101 01:01:03.153284   59148 cache_images.go:84] Images are preloaded, skipping loading
	I1101 01:01:03.153341   59148 ssh_runner.go:195] Run: crio config
	I1101 01:01:03.228205   59148 cni.go:84] Creating CNI manager for ""
	I1101 01:01:03.228225   59148 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:01:03.228241   59148 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 01:01:03.228265   59148 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.97 APIServerPort:8444 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-639310 NodeName:default-k8s-diff-port-639310 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 01:01:03.228386   59148 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.97
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-639310"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 01:01:03.228463   59148 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-639310 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-639310 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1101 01:01:03.228517   59148 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 01:01:03.240926   59148 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 01:01:03.241014   59148 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 01:01:03.253440   59148 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I1101 01:01:03.271480   59148 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 01:01:03.292784   59148 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I1101 01:01:03.315295   59148 ssh_runner.go:195] Run: grep 192.168.72.97	control-plane.minikube.internal$ /etc/hosts
	I1101 01:01:03.319922   59148 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:01:03.332820   59148 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310 for IP: 192.168.72.97
	I1101 01:01:03.332869   59148 certs.go:190] acquiring lock for shared ca certs: {Name:mk6185b3938c4b51e80f57b7c81926c81b632b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:01:03.333015   59148 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key
	I1101 01:01:03.333067   59148 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key
	I1101 01:01:03.333174   59148 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/client.key
	I1101 01:01:03.333255   59148 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/apiserver.key.6d6df538
	I1101 01:01:03.333307   59148 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/proxy-client.key
	I1101 01:01:03.333469   59148 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem (1338 bytes)
	W1101 01:01:03.333531   59148 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504_empty.pem, impossibly tiny 0 bytes
	I1101 01:01:03.333542   59148 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 01:01:03.333580   59148 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem (1078 bytes)
	I1101 01:01:03.333632   59148 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem (1123 bytes)
	I1101 01:01:03.333699   59148 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem (1675 bytes)
	I1101 01:01:03.333761   59148 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:01:03.334633   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 01:01:03.361740   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 01:01:03.387535   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 01:01:03.414252   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 01:01:03.438492   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 01:01:03.463501   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 01:01:03.489800   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 01:01:03.517317   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 01:01:03.543330   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem --> /usr/share/ca-certificates/14504.pem (1338 bytes)
	I1101 01:01:03.567744   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /usr/share/ca-certificates/145042.pem (1708 bytes)
	I1101 01:01:03.594230   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 01:01:03.620857   59148 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 01:01:03.638676   59148 ssh_runner.go:195] Run: openssl version
	I1101 01:01:03.644139   59148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14504.pem && ln -fs /usr/share/ca-certificates/14504.pem /etc/ssl/certs/14504.pem"
	I1101 01:01:03.654667   59148 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14504.pem
	I1101 01:01:03.659261   59148 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 01:01:03.659322   59148 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14504.pem
	I1101 01:01:03.664592   59148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14504.pem /etc/ssl/certs/51391683.0"
	I1101 01:01:03.675482   59148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145042.pem && ln -fs /usr/share/ca-certificates/145042.pem /etc/ssl/certs/145042.pem"
	I1101 01:01:03.687903   59148 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145042.pem
	I1101 01:01:03.692901   59148 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 01:01:03.692970   59148 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145042.pem
	I1101 01:01:03.698691   59148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145042.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 01:01:03.709971   59148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 01:01:03.720612   59148 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:01:03.725306   59148 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:01:03.725397   59148 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:01:03.731004   59148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 01:01:03.743558   59148 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 01:01:03.748428   59148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 01:01:03.754404   59148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 01:01:03.760210   59148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 01:01:03.765964   59148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 01:01:03.771813   59148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 01:01:03.777659   59148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 01:01:03.783754   59148 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-639310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.3 ClusterName:default-k8s-diff-port-639310 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.97 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extra
Disks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 01:01:03.783846   59148 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 01:01:03.783903   59148 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:01:03.823390   59148 cri.go:89] found id: ""
	I1101 01:01:03.823473   59148 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 01:01:03.835317   59148 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1101 01:01:03.835339   59148 kubeadm.go:636] restartCluster start
	I1101 01:01:03.835393   59148 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 01:01:03.845532   59148 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:03.846629   59148 kubeconfig.go:92] found "default-k8s-diff-port-639310" server: "https://192.168.72.97:8444"
	I1101 01:01:03.849176   59148 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 01:01:03.859318   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:03.859387   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:03.871598   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:03.871620   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:03.871682   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:03.882903   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:04.383593   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:04.383684   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:04.398424   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:04.883982   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:04.884095   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:04.901344   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:01.341708   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:01.342186   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:01.342216   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:01.342138   60142 retry.go:31] will retry after 1.416825478s: waiting for machine to come up
	I1101 01:01:02.760851   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:02.761364   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:02.761391   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:02.761319   60142 retry.go:31] will retry after 1.783291063s: waiting for machine to come up
	I1101 01:01:04.546179   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:04.546731   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:04.546768   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:04.546684   60142 retry.go:31] will retry after 1.94150512s: waiting for machine to come up
	I1101 01:01:04.706156   58823 api_server.go:269] stopped: https://192.168.39.90:8443/healthz: Get "https://192.168.39.90:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 01:01:04.706226   58823 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I1101 01:01:05.474195   58823 api_server.go:279] https://192.168.39.90:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 01:01:05.474233   58823 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 01:01:05.975031   58823 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I1101 01:01:05.981753   58823 api_server.go:279] https://192.168.39.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1101 01:01:05.981796   58823 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1101 01:01:06.474331   58823 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I1101 01:01:06.483910   58823 api_server.go:279] https://192.168.39.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1101 01:01:06.483971   58823 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1101 01:01:06.974478   58823 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I1101 01:01:06.983225   58823 api_server.go:279] https://192.168.39.90:8443/healthz returned 200:
	ok
	I1101 01:01:06.992078   58823 api_server.go:141] control plane version: v1.16.0
	I1101 01:01:06.992104   58823 api_server.go:131] duration metric: took 7.286307099s to wait for apiserver health ...
	I1101 01:01:06.992112   58823 cni.go:84] Creating CNI manager for ""
	I1101 01:01:06.992118   58823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:01:06.994180   58823 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:01:06.995961   58823 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:01:07.007478   58823 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1101 01:01:07.025029   58823 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:01:07.036645   58823 system_pods.go:59] 7 kube-system pods found
	I1101 01:01:07.036685   58823 system_pods.go:61] "coredns-5644d7b6d9-swhtm" [5c5eacff-9271-46c5-add0-a3931b67876b] Running
	I1101 01:01:07.036692   58823 system_pods.go:61] "etcd-old-k8s-version-330042" [0b703394-0d1c-419d-8e08-c2c299046293] Running
	I1101 01:01:07.036699   58823 system_pods.go:61] "kube-apiserver-old-k8s-version-330042" [0dcb0028-fa22-4107-afa1-fbdd14b615ab] Running
	I1101 01:01:07.036706   58823 system_pods.go:61] "kube-controller-manager-old-k8s-version-330042" [adc1372e-45e1-4365-a039-c06af715cb24] Running
	I1101 01:01:07.036712   58823 system_pods.go:61] "kube-proxy-h86m8" [6db2c8ff-26f9-4f22-9cbd-2405a81d9128] Running
	I1101 01:01:07.036718   58823 system_pods.go:61] "kube-scheduler-old-k8s-version-330042" [f3f78aa9-fcb1-4b87-b7fa-f86c44e801c0] Running
	I1101 01:01:07.036724   58823 system_pods.go:61] "storage-provisioner" [710e45b8-dab7-4bbc-9ce8-f607db4cb63e] Running
	I1101 01:01:07.036733   58823 system_pods.go:74] duration metric: took 11.681153ms to wait for pod list to return data ...
	I1101 01:01:07.036745   58823 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:01:07.043383   58823 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:01:07.043420   58823 node_conditions.go:123] node cpu capacity is 2
	I1101 01:01:07.043433   58823 node_conditions.go:105] duration metric: took 6.681589ms to run NodePressure ...
	I1101 01:01:07.043454   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:07.419893   58823 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1101 01:01:07.425342   58823 retry.go:31] will retry after 365.112122ms: kubelet not initialised
	I1101 01:01:03.491770   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:05.989935   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:05.383225   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:05.383308   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:05.399889   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:05.884036   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:05.884134   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:05.899867   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:06.383118   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:06.383241   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:06.399285   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:06.883379   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:06.883497   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:06.895160   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:07.383835   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:07.383951   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:07.401766   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:07.883254   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:07.883368   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:07.900024   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:08.383405   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:08.383494   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:08.401659   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:08.883099   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:08.883189   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:08.898348   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:09.383858   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:09.384003   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:09.396380   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:09.884003   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:09.884128   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:09.901031   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:06.489565   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:06.490176   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:06.490200   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:06.490117   60142 retry.go:31] will retry after 2.694877407s: waiting for machine to come up
	I1101 01:01:09.186086   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:09.186554   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:09.186584   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:09.186497   60142 retry.go:31] will retry after 2.651563817s: waiting for machine to come up
	I1101 01:01:07.799240   58823 retry.go:31] will retry after 519.025086ms: kubelet not initialised
	I1101 01:01:08.325024   58823 retry.go:31] will retry after 345.44325ms: kubelet not initialised
	I1101 01:01:08.674686   58823 retry.go:31] will retry after 665.113314ms: kubelet not initialised
	I1101 01:01:09.345867   58823 retry.go:31] will retry after 1.421023017s: kubelet not initialised
	I1101 01:01:10.773100   58823 retry.go:31] will retry after 1.15707988s: kubelet not initialised
	I1101 01:01:11.936215   58823 retry.go:31] will retry after 3.290674523s: kubelet not initialised
	I1101 01:01:08.490229   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:10.990967   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:12.991285   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:10.383739   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:10.383800   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:10.398972   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:10.882991   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:10.883089   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:10.897346   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:11.383976   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:11.384059   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:11.396332   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:11.883903   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:11.884020   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:11.897279   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:12.383675   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:12.383786   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:12.399623   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:12.883112   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:12.883191   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:12.895484   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:13.383069   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:13.383181   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:13.395417   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:13.860229   59148 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1101 01:01:13.860262   59148 kubeadm.go:1128] stopping kube-system containers ...
	I1101 01:01:13.860277   59148 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 01:01:13.860360   59148 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:01:13.901712   59148 cri.go:89] found id: ""
	I1101 01:01:13.901809   59148 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 01:01:13.918956   59148 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:01:13.931401   59148 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:01:13.931477   59148 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:01:13.943486   59148 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 01:01:13.943512   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:14.077324   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:11.839684   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:11.840140   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:11.840169   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:11.840105   60142 retry.go:31] will retry after 4.157820096s: waiting for machine to come up
	I1101 01:01:15.233157   58823 retry.go:31] will retry after 3.531336164s: kubelet not initialised
	I1101 01:01:15.490358   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:17.491953   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:16.001208   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.001765   58676 main.go:141] libmachine: (no-preload-008483) Found IP for machine: 192.168.50.140
	I1101 01:01:16.001790   58676 main.go:141] libmachine: (no-preload-008483) Reserving static IP address...
	I1101 01:01:16.001806   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has current primary IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.002298   58676 main.go:141] libmachine: (no-preload-008483) Reserved static IP address: 192.168.50.140
	I1101 01:01:16.002338   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "no-preload-008483", mac: "52:54:00:6c:aa:b5", ip: "192.168.50.140"} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.002357   58676 main.go:141] libmachine: (no-preload-008483) Waiting for SSH to be available...
	I1101 01:01:16.002381   58676 main.go:141] libmachine: (no-preload-008483) DBG | skip adding static IP to network mk-no-preload-008483 - found existing host DHCP lease matching {name: "no-preload-008483", mac: "52:54:00:6c:aa:b5", ip: "192.168.50.140"}
	I1101 01:01:16.002397   58676 main.go:141] libmachine: (no-preload-008483) DBG | Getting to WaitForSSH function...
	I1101 01:01:16.004912   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.005349   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.005387   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.005528   58676 main.go:141] libmachine: (no-preload-008483) DBG | Using SSH client type: external
	I1101 01:01:16.005550   58676 main.go:141] libmachine: (no-preload-008483) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa (-rw-------)
	I1101 01:01:16.005589   58676 main.go:141] libmachine: (no-preload-008483) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.140 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 01:01:16.005607   58676 main.go:141] libmachine: (no-preload-008483) DBG | About to run SSH command:
	I1101 01:01:16.005621   58676 main.go:141] libmachine: (no-preload-008483) DBG | exit 0
	I1101 01:01:16.100131   58676 main.go:141] libmachine: (no-preload-008483) DBG | SSH cmd err, output: <nil>: 
	I1101 01:01:16.100576   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetConfigRaw
	I1101 01:01:16.101304   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetIP
	I1101 01:01:16.104212   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.104482   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.104528   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.104710   58676 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/config.json ...
	I1101 01:01:16.104933   58676 machine.go:88] provisioning docker machine ...
	I1101 01:01:16.104951   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:01:16.105159   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetMachineName
	I1101 01:01:16.105351   58676 buildroot.go:166] provisioning hostname "no-preload-008483"
	I1101 01:01:16.105375   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetMachineName
	I1101 01:01:16.105551   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:16.107936   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.108287   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.108333   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.108422   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:16.108594   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.108734   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.108861   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:16.109041   58676 main.go:141] libmachine: Using SSH client type: native
	I1101 01:01:16.109531   58676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I1101 01:01:16.109557   58676 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-008483 && echo "no-preload-008483" | sudo tee /etc/hostname
	I1101 01:01:16.249893   58676 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-008483
	
	I1101 01:01:16.249924   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:16.253130   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.253531   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.253571   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.253879   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:16.254106   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.254304   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.254441   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:16.254608   58676 main.go:141] libmachine: Using SSH client type: native
	I1101 01:01:16.254965   58676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I1101 01:01:16.254987   58676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-008483' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-008483/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-008483' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 01:01:16.386797   58676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 01:01:16.386834   58676 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1101 01:01:16.386862   58676 buildroot.go:174] setting up certificates
	I1101 01:01:16.386870   58676 provision.go:83] configureAuth start
	I1101 01:01:16.386879   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetMachineName
	I1101 01:01:16.387149   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetIP
	I1101 01:01:16.390409   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.390812   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.390844   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.391055   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:16.393580   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.394122   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.394154   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.394352   58676 provision.go:138] copyHostCerts
	I1101 01:01:16.394425   58676 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem, removing ...
	I1101 01:01:16.394438   58676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 01:01:16.394506   58676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1101 01:01:16.394646   58676 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem, removing ...
	I1101 01:01:16.394658   58676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 01:01:16.394690   58676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1101 01:01:16.394774   58676 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem, removing ...
	I1101 01:01:16.394786   58676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 01:01:16.394811   58676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1101 01:01:16.394874   58676 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.no-preload-008483 san=[192.168.50.140 192.168.50.140 localhost 127.0.0.1 minikube no-preload-008483]
	I1101 01:01:16.593958   58676 provision.go:172] copyRemoteCerts
	I1101 01:01:16.594024   58676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 01:01:16.594046   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:16.597073   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.597449   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.597484   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.597723   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:16.597956   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.598108   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:16.598247   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:01:16.689574   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 01:01:16.714820   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1101 01:01:16.744383   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 01:01:16.769305   58676 provision.go:86] duration metric: configureAuth took 382.416455ms
	I1101 01:01:16.769338   58676 buildroot.go:189] setting minikube options for container-runtime
	I1101 01:01:16.769596   58676 config.go:182] Loaded profile config "no-preload-008483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:01:16.769692   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:16.773209   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.773565   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.773628   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.773828   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:16.774071   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.774353   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.774570   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:16.774772   58676 main.go:141] libmachine: Using SSH client type: native
	I1101 01:01:16.775132   58676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I1101 01:01:16.775150   58676 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 01:01:17.110397   58676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 01:01:17.110481   58676 machine.go:91] provisioned docker machine in 1.005532035s
	I1101 01:01:17.110500   58676 start.go:300] post-start starting for "no-preload-008483" (driver="kvm2")
	I1101 01:01:17.110513   58676 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 01:01:17.110559   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:01:17.110920   58676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 01:01:17.110948   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:17.114342   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.114794   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:17.114829   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.115028   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:17.115227   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:17.115440   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:17.115621   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:01:17.210514   58676 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 01:01:17.216393   58676 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 01:01:17.216423   58676 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1101 01:01:17.216522   58676 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1101 01:01:17.216640   58676 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> 145042.pem in /etc/ssl/certs
	I1101 01:01:17.216773   58676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 01:01:17.229604   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:01:17.255095   58676 start.go:303] post-start completed in 144.577436ms
	I1101 01:01:17.255120   58676 fix.go:56] fixHost completed within 21.173509578s
	I1101 01:01:17.255192   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:17.258433   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.258833   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:17.258858   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.259085   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:17.259305   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:17.259478   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:17.259628   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:17.259825   58676 main.go:141] libmachine: Using SSH client type: native
	I1101 01:01:17.260306   58676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I1101 01:01:17.260321   58676 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1101 01:01:17.389718   58676 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698800477.337229135
	
	I1101 01:01:17.389748   58676 fix.go:206] guest clock: 1698800477.337229135
	I1101 01:01:17.389770   58676 fix.go:219] Guest: 2023-11-01 01:01:17.337229135 +0000 UTC Remote: 2023-11-01 01:01:17.255124581 +0000 UTC m=+361.362725964 (delta=82.104554ms)
	I1101 01:01:17.389797   58676 fix.go:190] guest clock delta is within tolerance: 82.104554ms
	I1101 01:01:17.389804   58676 start.go:83] releasing machines lock for "no-preload-008483", held for 21.308227601s
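Editor's note: the `date +%!s(MISSING).%!N(MISSING)` text above is an artifact of Go's fmt verb handling in the logger; the command actually run on the guest is `date +%s.%N`, and its output (1698800477.337229135) is compared against the host clock to produce the ~82 ms delta logged here. A rough sketch of that comparison follows; `guestDate` is a hypothetical stand-in for running the command over SSH.

package main

import (
	"fmt"
	"strconv"
	"time"
)

// guestDate stands in for running `date +%s.%N` on the guest over SSH.
// Hypothetical helper: it just returns the sample value from the log.
func guestDate() string { return "1698800477.337229135" }

func main() {
	secs, err := strconv.ParseFloat(guestDate(), 64)
	if err != nil {
		panic(err)
	}
	// Sub-microsecond precision is lost in the float conversion, which is
	// fine for a clock-tolerance check.
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	// With the fixed sample above the delta is just the age of that
	// timestamp; against a live guest it would be the clock skew.
	fmt.Printf("guest clock: %v, delta vs host: %v\n", guest.UTC(), delta)
}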
	I1101 01:01:17.389828   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:01:17.390149   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetIP
	I1101 01:01:17.393289   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.393692   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:17.393723   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.393937   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:01:17.394589   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:01:17.394780   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:01:17.394877   58676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 01:01:17.394918   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:17.395060   58676 ssh_runner.go:195] Run: cat /version.json
	I1101 01:01:17.395115   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:17.398497   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:17.398497   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.398581   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:17.398642   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.398665   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.398700   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:17.398853   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:17.398861   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:17.398881   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.398995   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:01:17.399475   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:17.399644   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:17.399798   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:17.399976   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:01:17.524462   58676 ssh_runner.go:195] Run: systemctl --version
	I1101 01:01:17.530328   58676 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 01:01:17.678956   58676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 01:01:17.686754   58676 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 01:01:17.686834   58676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 01:01:17.705358   58676 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 01:01:17.705388   58676 start.go:472] detecting cgroup driver to use...
	I1101 01:01:17.705527   58676 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 01:01:17.722410   58676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 01:01:17.739380   58676 docker.go:204] disabling cri-docker service (if available) ...
	I1101 01:01:17.739443   58676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 01:01:17.755953   58676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 01:01:17.769672   58676 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 01:01:17.900801   58676 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 01:01:18.027283   58676 docker.go:220] disabling docker service ...
	I1101 01:01:18.027378   58676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 01:01:18.041230   58676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 01:01:18.052784   58676 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 01:01:18.165341   58676 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 01:01:18.276403   58676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 01:01:18.289618   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 01:01:18.308480   58676 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 01:01:18.308562   58676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:01:18.318597   58676 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 01:01:18.318673   58676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:01:18.328312   58676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:01:18.340054   58676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:01:18.351854   58676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 01:01:18.364129   58676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 01:01:18.372789   58676 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 01:01:18.372879   58676 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 01:01:18.385792   58676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 01:01:18.394803   58676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 01:01:18.503941   58676 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 01:01:18.687034   58676 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 01:01:18.687137   58676 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 01:01:18.691750   58676 start.go:540] Will wait 60s for crictl version
	I1101 01:01:18.691818   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:18.695752   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 01:01:18.735012   58676 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1101 01:01:18.735098   58676 ssh_runner.go:195] Run: crio --version
	I1101 01:01:18.782835   58676 ssh_runner.go:195] Run: crio --version
	I1101 01:01:18.829727   58676 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1101 01:01:15.054547   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:15.248625   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:15.325492   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:15.396782   59148 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:01:15.396854   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:15.420220   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:15.941271   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:16.441997   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:16.942240   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:17.441850   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:17.941784   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:17.965191   59148 api_server.go:72] duration metric: took 2.5684081s to wait for apiserver process to appear ...
	I1101 01:01:17.965220   59148 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:01:17.965238   59148 api_server.go:253] Checking apiserver healthz at https://192.168.72.97:8444/healthz ...
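Editor's note: the repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above are the wait-for-apiserver-process step: the check keeps failing with exit status 1 until the static pod's process exists, then returns a PID. A simplified local sketch of the same wait using os/exec follows; minikube actually runs the command on the guest over SSH, and the 500 ms poll interval here is an assumption.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForProcess polls pgrep until it reports a PID for the pattern or the
// timeout elapses. pgrep exits non-zero when nothing matches.
func waitForProcess(pattern string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("pgrep", "-xnf", pattern).Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("process %q did not appear: %w", pattern, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	pid, err := waitForProcess("kube-apiserver.*minikube.*", 30*time.Second)
	fmt.Println(pid, err)
}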
	I1101 01:01:18.831303   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetIP
	I1101 01:01:18.834574   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:18.834969   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:18.835003   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:18.835233   58676 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1101 01:01:18.839259   58676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:01:18.853665   58676 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 01:01:18.853725   58676 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:01:18.890995   58676 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1101 01:01:18.891024   58676 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.3 registry.k8s.io/kube-controller-manager:v1.28.3 registry.k8s.io/kube-scheduler:v1.28.3 registry.k8s.io/kube-proxy:v1.28.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1101 01:01:18.891130   58676 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1101 01:01:18.891143   58676 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.3
	I1101 01:01:18.891144   58676 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1101 01:01:18.891201   58676 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1101 01:01:18.891263   58676 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.3
	I1101 01:01:18.891397   58676 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.3
	I1101 01:01:18.891415   58676 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1101 01:01:18.891134   58676 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:01:18.892729   58676 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.3
	I1101 01:01:18.892742   58676 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:01:18.892747   58676 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1101 01:01:18.892760   58676 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1101 01:01:18.892760   58676 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1101 01:01:18.892729   58676 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.3
	I1101 01:01:18.892790   58676 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.3
	I1101 01:01:18.892835   58676 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1101 01:01:19.112836   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1101 01:01:19.131170   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.3
	I1101 01:01:19.147328   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.3
	I1101 01:01:19.148513   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I1101 01:01:19.155909   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.3
	I1101 01:01:19.163598   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.3
	I1101 01:01:19.166436   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I1101 01:01:19.290823   58676 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.3" needs transfer: "registry.k8s.io/kube-proxy:v1.28.3" does not exist at hash "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf" in container runtime
	I1101 01:01:19.290888   58676 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.3
	I1101 01:01:19.290943   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:19.331622   58676 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.3" does not exist at hash "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076" in container runtime
	I1101 01:01:19.331709   58676 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.3" does not exist at hash "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4" in container runtime
	I1101 01:01:19.331776   58676 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.3
	I1101 01:01:19.331717   58676 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.3
	I1101 01:01:19.331872   58676 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.3" does not exist at hash "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3" in container runtime
	I1101 01:01:19.331899   58676 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1101 01:01:19.331905   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:19.331645   58676 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1101 01:01:19.331979   58676 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1101 01:01:19.331986   58676 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1101 01:01:19.332011   58676 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1101 01:01:19.332023   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:19.331945   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:19.332053   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:19.332040   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.3
	I1101 01:01:19.331842   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:19.342099   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.3
	I1101 01:01:19.396521   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I1101 01:01:19.396603   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.3
	I1101 01:01:19.396612   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3
	I1101 01:01:19.396628   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.3
	I1101 01:01:19.396681   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1101 01:01:19.396700   58676 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.3
	I1101 01:01:19.396750   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3
	I1101 01:01:19.396842   58676 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1101 01:01:19.497732   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.3 (exists)
	I1101 01:01:19.497756   58676 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.3
	I1101 01:01:19.497784   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1101 01:01:19.497813   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3
	I1101 01:01:19.497871   58676 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0
	I1101 01:01:19.497924   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3
	I1101 01:01:19.497964   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.3 (exists)
	I1101 01:01:19.498009   58676 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1101 01:01:19.498015   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3
	I1101 01:01:19.498054   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1101 01:01:19.498111   58676 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I1101 01:01:19.498117   58676 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1101 01:01:19.764214   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
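Editor's note: the block above is the image-cache path for the no-preload profile: `podman image inspect` decides whether each image is already in the container store, anything missing is marked "needs transfer", the stale tag is removed with `crictl rmi`, and the cached archive under /var/lib/minikube/images is loaded with `podman load`. Below is a condensed sketch of that check-then-load decision; the error handling and parallelism of the real cache_images code are omitted, and running it requires sudo access to podman as in the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ensureImage checks whether the image is already present in podman's
// storage, and if not, loads it from a cached tar archive. Simplified
// stand-in for minikube's cache_images flow shown in the log.
func ensureImage(image, archive string) error {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err == nil && strings.TrimSpace(string(out)) != "" {
		fmt.Printf("%s already present (%s)\n", image, strings.TrimSpace(string(out)))
		return nil
	}
	fmt.Printf("%s needs transfer, loading %s\n", image, archive)
	return exec.Command("sudo", "podman", "load", "-i", archive).Run()
}

func main() {
	err := ensureImage("registry.k8s.io/kube-proxy:v1.28.3",
		"/var/lib/minikube/images/kube-proxy_v1.28.3")
	fmt.Println("load result:", err)
}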
	I1101 01:01:18.769797   58823 retry.go:31] will retry after 5.956460089s: kubelet not initialised
	I1101 01:01:19.987384   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:21.989585   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:22.277798   59148 api_server.go:279] https://192.168.72.97:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 01:01:22.277829   59148 api_server.go:103] status: https://192.168.72.97:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 01:01:22.277839   59148 api_server.go:253] Checking apiserver healthz at https://192.168.72.97:8444/healthz ...
	I1101 01:01:22.371756   59148 api_server.go:279] https://192.168.72.97:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 01:01:22.371796   59148 api_server.go:103] status: https://192.168.72.97:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 01:01:22.872332   59148 api_server.go:253] Checking apiserver healthz at https://192.168.72.97:8444/healthz ...
	I1101 01:01:22.884543   59148 api_server.go:279] https://192.168.72.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:01:22.884587   59148 api_server.go:103] status: https://192.168.72.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:01:23.372033   59148 api_server.go:253] Checking apiserver healthz at https://192.168.72.97:8444/healthz ...
	I1101 01:01:23.381608   59148 api_server.go:279] https://192.168.72.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:01:23.381657   59148 api_server.go:103] status: https://192.168.72.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:01:23.872319   59148 api_server.go:253] Checking apiserver healthz at https://192.168.72.97:8444/healthz ...
	I1101 01:01:23.879515   59148 api_server.go:279] https://192.168.72.97:8444/healthz returned 200:
	ok
	I1101 01:01:23.892376   59148 api_server.go:141] control plane version: v1.28.3
	I1101 01:01:23.892412   59148 api_server.go:131] duration metric: took 5.927178892s to wait for apiserver health ...
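
The retries above show the test harness polling the apiserver's /healthz endpoint until the remaining post-start hook (poststarthook/rbac/bootstrap-roles) completes and the endpoint returns 200. The loop below is an illustrative Go sketch of that polling pattern, not minikube's api_server.go; the URL, the 4-minute timeout, and the insecure TLS client are assumptions made only for the example.

// Illustrative healthz polling loop (not minikube's implementation).
// Assumes an apiserver reachable at the given URL and a client that
// skips TLS verification, which a production client should not do.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "ok" – control plane is healthy
			}
			// A 500 listing "[-]poststarthook/... failed" means the apiserver
			// is up but still running its post-start hooks; keep polling.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.97:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
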
	I1101 01:01:23.892424   59148 cni.go:84] Creating CNI manager for ""
	I1101 01:01:23.892433   59148 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:01:23.894577   59148 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:01:23.896163   59148 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:01:23.928482   59148 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1101 01:01:23.952485   59148 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:01:23.968054   59148 system_pods.go:59] 8 kube-system pods found
	I1101 01:01:23.968095   59148 system_pods.go:61] "coredns-5dd5756b68-lmxx8" [c74c5ddc-56a8-422c-a140-1fdd14ef817d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 01:01:23.968115   59148 system_pods.go:61] "etcd-default-k8s-diff-port-639310" [1baf2571-f6c6-43bc-8051-e72f7eb4ed70] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 01:01:23.968126   59148 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-639310" [9cbc66c6-7c66-4b24-9400-a5add2edec14] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 01:01:23.968145   59148 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-639310" [99945be6-6fb8-4da6-8c6a-c25a2023d2d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 01:01:23.968158   59148 system_pods.go:61] "kube-proxy-f45wg" [abe74c94-5140-4c35-a141-d995652948e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 01:01:23.968167   59148 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-639310" [299c1962-1945-4525-90c7-384d515dc4e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 01:01:23.968188   59148 system_pods.go:61] "metrics-server-57f55c9bc5-6szl7" [1e00ef03-d5f4-4e8b-a247-8c31a5492f9e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:01:23.968201   59148 system_pods.go:61] "storage-provisioner" [fe2e7631-0564-44d2-afbd-578fb37f6a04] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 01:01:23.968215   59148 system_pods.go:74] duration metric: took 15.694719ms to wait for pod list to return data ...
	I1101 01:01:23.968224   59148 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:01:23.972141   59148 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:01:23.972177   59148 node_conditions.go:123] node cpu capacity is 2
	I1101 01:01:23.972191   59148 node_conditions.go:105] duration metric: took 3.96106ms to run NodePressure ...
	I1101 01:01:23.972214   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:24.253558   59148 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1101 01:01:24.258842   59148 kubeadm.go:787] kubelet initialised
	I1101 01:01:24.258869   59148 kubeadm.go:788] duration metric: took 5.283339ms waiting for restarted kubelet to initialise ...
	I1101 01:01:24.258878   59148 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:01:24.265507   59148 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-lmxx8" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:24.271381   59148 pod_ready.go:97] node "default-k8s-diff-port-639310" hosting pod "coredns-5dd5756b68-lmxx8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.271408   59148 pod_ready.go:81] duration metric: took 5.876802ms waiting for pod "coredns-5dd5756b68-lmxx8" in "kube-system" namespace to be "Ready" ...
	E1101 01:01:24.271418   59148 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-639310" hosting pod "coredns-5dd5756b68-lmxx8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.271426   59148 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:24.277446   59148 pod_ready.go:97] node "default-k8s-diff-port-639310" hosting pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.277476   59148 pod_ready.go:81] duration metric: took 6.04229ms waiting for pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	E1101 01:01:24.277487   59148 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-639310" hosting pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.277495   59148 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:24.283557   59148 pod_ready.go:97] node "default-k8s-diff-port-639310" hosting pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.283604   59148 pod_ready.go:81] duration metric: took 6.094277ms waiting for pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	E1101 01:01:24.283617   59148 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-639310" hosting pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.283630   59148 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:24.357249   59148 pod_ready.go:97] node "default-k8s-diff-port-639310" hosting pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.357288   59148 pod_ready.go:81] duration metric: took 73.64295ms waiting for pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	E1101 01:01:24.357302   59148 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-639310" hosting pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.357319   59148 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f45wg" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:21.457919   58676 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0: (1.960002941s)
	I1101 01:01:21.457955   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I1101 01:01:21.458111   58676 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.3: (1.960074529s)
	I1101 01:01:21.458140   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.3 (exists)
	I1101 01:01:21.458152   58676 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.3: (1.960014372s)
	I1101 01:01:21.458176   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.3 (exists)
	I1101 01:01:21.458226   58676 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1: (1.960094366s)
	I1101 01:01:21.458252   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I1101 01:01:21.458267   58676 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.694021872s)
	I1101 01:01:21.458306   58676 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1101 01:01:21.458344   58676 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:01:21.458392   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:21.458644   58676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3: (1.960815533s)
	I1101 01:01:21.458659   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3 from cache
	I1101 01:01:21.458686   58676 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1101 01:01:21.458718   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1101 01:01:21.462463   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:01:23.757842   58676 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.295346464s)
	I1101 01:01:23.757911   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1101 01:01:23.757849   58676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3: (2.299099605s)
	I1101 01:01:23.757965   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3 from cache
	I1101 01:01:23.758006   58676 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I1101 01:01:23.758025   58676 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1101 01:01:23.758040   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I1101 01:01:23.764726   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1101 01:01:24.732471   58823 retry.go:31] will retry after 9.584941607s: kubelet not initialised
	I1101 01:01:23.990727   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:26.489463   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:25.156181   59148 pod_ready.go:92] pod "kube-proxy-f45wg" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:25.156211   59148 pod_ready.go:81] duration metric: took 798.883976ms waiting for pod "kube-proxy-f45wg" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:25.156225   59148 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:27.476794   59148 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:29.974327   59148 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:29.974364   59148 pod_ready.go:81] duration metric: took 4.818128166s waiting for pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:29.974381   59148 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace to be "Ready" ...
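
The pod_ready.go lines above poll each kube-system pod until its Ready condition is True (or, while the node itself is not Ready, record the wait as skipped). The client-go sketch below shows the underlying condition check; it is illustrative only, and the kubeconfig path, namespace, and pod name are assumptions taken from the log for the example.

// Illustrative Ready-condition poll with client-go (not minikube's
// pod_ready.go). Kubeconfig path, namespace and pod name are assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-57f55c9bc5-6szl7", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
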
	I1101 01:01:28.990433   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:30.991378   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:32.004594   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:34.006695   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:31.399348   58676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.641283444s)
	I1101 01:01:31.399378   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I1101 01:01:31.399412   58676 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1101 01:01:31.399465   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1101 01:01:33.857323   58676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3: (2.45781579s)
	I1101 01:01:33.857356   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3 from cache
	I1101 01:01:33.857384   58676 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1101 01:01:33.857444   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1101 01:01:34.322788   58823 retry.go:31] will retry after 7.673111332s: kubelet not initialised
	I1101 01:01:33.488934   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:35.489417   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:37.989455   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:36.506432   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:39.004133   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:36.328716   58676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3: (2.471243195s)
	I1101 01:01:36.328755   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3 from cache
	I1101 01:01:36.328788   58676 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I1101 01:01:36.328839   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I1101 01:01:37.691820   58676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.362944664s)
	I1101 01:01:37.691851   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I1101 01:01:37.691877   58676 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1101 01:01:37.691978   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1101 01:01:38.442125   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1101 01:01:38.442181   58676 cache_images.go:123] Successfully loaded all cached images
	I1101 01:01:38.442188   58676 cache_images.go:92] LoadImages completed in 19.55115042s
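
The cache_images.go lines above copy cached image tarballs to the VM and load them into CRI-O's image storage with "podman load -i", as the completed commands show. Below is a minimal Go sketch of that step: it shells out to podman the same way the log does, but the tarball path is an assumption and the code is illustrative, not minikube's implementation.

// Sketch only: load a cached image tarball into the runtime's storage
// by invoking "sudo podman load -i <tarball>". Path is an assumption.
package main

import (
	"fmt"
	"os/exec"
)

func loadImage(tarball string) error {
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
	}
	return nil
}

func main() {
	if err := loadImage("/var/lib/minikube/images/etcd_3.5.9-0"); err != nil {
		fmt.Println(err)
	}
}
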
	I1101 01:01:38.442260   58676 ssh_runner.go:195] Run: crio config
	I1101 01:01:38.499778   58676 cni.go:84] Creating CNI manager for ""
	I1101 01:01:38.499799   58676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:01:38.499820   58676 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 01:01:38.499846   58676 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.140 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-008483 NodeName:no-preload-008483 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 01:01:38.500007   58676 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.140
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-008483"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.140
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.140"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 01:01:38.500076   58676 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-008483 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:no-preload-008483 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
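
The kubeadm config dumped above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that gets written to /var/tmp/minikube/kubeadm.yaml.new later in the log. A minimal sketch of reading such a stream with gopkg.in/yaml.v3 follows; it is not part of the test run, and the file path is an assumption.

// Minimal sketch: decode a multi-document kubeadm config and print each
// document's kind/apiVersion. Uses gopkg.in/yaml.v3; path is an assumption.
package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // end of the YAML stream
			}
			panic(err)
		}
		// Expect InitConfiguration, ClusterConfiguration,
		// KubeletConfiguration and KubeProxyConfiguration in turn.
		fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
	}
}
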
	I1101 01:01:38.500135   58676 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 01:01:38.510073   58676 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 01:01:38.510160   58676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 01:01:38.517853   58676 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1101 01:01:38.534085   58676 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 01:01:38.549312   58676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I1101 01:01:38.566320   58676 ssh_runner.go:195] Run: grep 192.168.50.140	control-plane.minikube.internal$ /etc/hosts
	I1101 01:01:38.569923   58676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.140	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:01:38.582147   58676 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483 for IP: 192.168.50.140
	I1101 01:01:38.582180   58676 certs.go:190] acquiring lock for shared ca certs: {Name:mk6185b3938c4b51e80f57b7c81926c81b632b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:01:38.582353   58676 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key
	I1101 01:01:38.582412   58676 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key
	I1101 01:01:38.582512   58676 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/client.key
	I1101 01:01:38.582596   58676 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/apiserver.key.306fa7af
	I1101 01:01:38.582664   58676 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/proxy-client.key
	I1101 01:01:38.582841   58676 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem (1338 bytes)
	W1101 01:01:38.582887   58676 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504_empty.pem, impossibly tiny 0 bytes
	I1101 01:01:38.582903   58676 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 01:01:38.582941   58676 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem (1078 bytes)
	I1101 01:01:38.582978   58676 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem (1123 bytes)
	I1101 01:01:38.583015   58676 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem (1675 bytes)
	I1101 01:01:38.583082   58676 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:01:38.583827   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 01:01:38.607306   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 01:01:38.631666   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 01:01:38.655201   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 01:01:38.678237   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 01:01:38.700410   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 01:01:38.726807   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 01:01:38.752672   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 01:01:38.776285   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 01:01:38.799902   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem --> /usr/share/ca-certificates/14504.pem (1338 bytes)
	I1101 01:01:38.823790   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /usr/share/ca-certificates/145042.pem (1708 bytes)
	I1101 01:01:38.847407   58676 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 01:01:38.863594   58676 ssh_runner.go:195] Run: openssl version
	I1101 01:01:38.869214   58676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14504.pem && ln -fs /usr/share/ca-certificates/14504.pem /etc/ssl/certs/14504.pem"
	I1101 01:01:38.878725   58676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14504.pem
	I1101 01:01:38.883007   58676 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 01:01:38.883069   58676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14504.pem
	I1101 01:01:38.888251   58676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14504.pem /etc/ssl/certs/51391683.0"
	I1101 01:01:38.899894   58676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145042.pem && ln -fs /usr/share/ca-certificates/145042.pem /etc/ssl/certs/145042.pem"
	I1101 01:01:38.909658   58676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145042.pem
	I1101 01:01:38.914011   58676 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 01:01:38.914088   58676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145042.pem
	I1101 01:01:38.919323   58676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145042.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 01:01:38.928836   58676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 01:01:38.937988   58676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:01:38.943540   58676 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:01:38.943607   58676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:01:38.949543   58676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 01:01:38.959098   58676 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 01:01:38.963149   58676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 01:01:38.968868   58676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 01:01:38.974315   58676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 01:01:38.979746   58676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 01:01:38.985852   58676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 01:01:38.991864   58676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 01:01:38.998153   58676 kubeadm.go:404] StartCluster: {Name:no-preload-008483 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-008483 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.140 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 01:01:38.998271   58676 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 01:01:38.998340   58676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:01:39.045797   58676 cri.go:89] found id: ""
	I1101 01:01:39.045870   58676 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 01:01:39.056166   58676 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1101 01:01:39.056197   58676 kubeadm.go:636] restartCluster start
	I1101 01:01:39.056252   58676 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 01:01:39.065191   58676 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:39.066337   58676 kubeconfig.go:92] found "no-preload-008483" server: "https://192.168.50.140:8443"
	I1101 01:01:39.068843   58676 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 01:01:39.077558   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:39.077606   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:39.088105   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:39.088123   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:39.088168   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:39.100203   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:39.600957   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:39.601029   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:39.612652   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:40.101101   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:40.101191   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:40.113249   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:40.600487   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:40.600552   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:40.612183   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:42.002176   58823 kubeadm.go:787] kubelet initialised
	I1101 01:01:42.002198   58823 kubeadm.go:788] duration metric: took 34.582278796s waiting for restarted kubelet to initialise ...
	I1101 01:01:42.002211   58823 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:01:42.007691   58823 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-m8mn8" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.012657   58823 pod_ready.go:92] pod "coredns-5644d7b6d9-m8mn8" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:42.012677   58823 pod_ready.go:81] duration metric: took 4.961011ms waiting for pod "coredns-5644d7b6d9-m8mn8" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.012687   58823 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-swhtm" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.017099   58823 pod_ready.go:92] pod "coredns-5644d7b6d9-swhtm" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:42.017124   58823 pod_ready.go:81] duration metric: took 4.429709ms waiting for pod "coredns-5644d7b6d9-swhtm" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.017137   58823 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.021376   58823 pod_ready.go:92] pod "etcd-old-k8s-version-330042" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:42.021403   58823 pod_ready.go:81] duration metric: took 4.25772ms waiting for pod "etcd-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.021415   58823 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.026057   58823 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-330042" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:42.026080   58823 pod_ready.go:81] duration metric: took 4.65685ms waiting for pod "kube-apiserver-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.026096   58823 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.401057   58823 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-330042" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:42.401085   58823 pod_ready.go:81] duration metric: took 374.980275ms waiting for pod "kube-controller-manager-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.401099   58823 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-h86m8" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:40.487876   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:42.488609   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:41.504485   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:44.005180   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:41.100662   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:41.100773   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:41.113339   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:41.601121   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:41.601195   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:41.613986   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:42.101110   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:42.101188   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:42.113963   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:42.600356   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:42.600458   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:42.612154   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:43.100679   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:43.100767   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:43.113009   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:43.601328   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:43.601402   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:43.612862   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:44.101146   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:44.101261   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:44.113407   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:44.600812   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:44.600955   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:44.613161   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:45.100665   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:45.100769   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:45.112905   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:45.600416   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:45.600515   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:45.612930   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:42.801878   58823 pod_ready.go:92] pod "kube-proxy-h86m8" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:42.801899   58823 pod_ready.go:81] duration metric: took 400.793617ms waiting for pod "kube-proxy-h86m8" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.801907   58823 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:43.201586   58823 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-330042" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:43.201618   58823 pod_ready.go:81] duration metric: took 399.702904ms waiting for pod "kube-scheduler-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:43.201632   58823 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:45.508037   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:44.489092   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:46.493162   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:46.506251   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:49.004539   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:46.100957   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:46.101023   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:46.113645   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:46.600681   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:46.600781   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:46.612564   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:47.101090   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:47.101156   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:47.113500   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:47.601105   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:47.601244   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:47.613091   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:48.100608   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:48.100725   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:48.112995   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:48.600520   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:48.600603   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:48.612240   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:49.077973   58676 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1101 01:01:49.078017   58676 kubeadm.go:1128] stopping kube-system containers ...
	I1101 01:01:49.078031   58676 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 01:01:49.078097   58676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:01:49.117615   58676 cri.go:89] found id: ""
	I1101 01:01:49.117689   58676 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 01:01:49.133583   58676 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:01:49.142851   58676 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:01:49.142922   58676 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:01:49.151952   58676 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 01:01:49.151973   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:49.270827   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:50.046638   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:50.252510   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:50.327660   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:50.398419   58676 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:01:50.398511   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:50.415262   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:50.931672   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:47.508466   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:49.509032   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:51.510816   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:48.987561   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:50.989519   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:52.989978   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:51.004704   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:53.006138   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:51.431168   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:51.931127   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:52.431292   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:52.462617   58676 api_server.go:72] duration metric: took 2.064198698s to wait for apiserver process to appear ...
	I1101 01:01:52.462644   58676 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:01:52.462658   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:52.463297   58676 api_server.go:269] stopped: https://192.168.50.140:8443/healthz: Get "https://192.168.50.140:8443/healthz": dial tcp 192.168.50.140:8443: connect: connection refused
	I1101 01:01:52.463360   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:52.463831   58676 api_server.go:269] stopped: https://192.168.50.140:8443/healthz: Get "https://192.168.50.140:8443/healthz": dial tcp 192.168.50.140:8443: connect: connection refused
	I1101 01:01:52.964290   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:54.007720   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:56.012280   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:56.353340   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 01:01:56.353399   58676 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 01:01:56.353416   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:56.404133   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:01:56.404176   58676 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:01:56.464272   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:56.470496   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:01:56.470553   58676 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:01:56.964058   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:56.975831   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:01:56.975877   58676 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:01:57.464038   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:57.472652   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:01:57.472697   58676 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:01:57.964020   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:57.970866   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 200:
	ok
	I1101 01:01:57.979612   58676 api_server.go:141] control plane version: v1.28.3
	I1101 01:01:57.979641   58676 api_server.go:131] duration metric: took 5.516990946s to wait for apiserver health ...
	I1101 01:01:57.979650   58676 cni.go:84] Creating CNI manager for ""
	I1101 01:01:57.979657   58676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:01:57.981694   58676 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:01:54.990377   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:57.489817   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:55.505767   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:57.505977   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:00.004800   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:57.983198   58676 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:01:58.006916   58676 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1101 01:01:58.035969   58676 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:01:58.047783   58676 system_pods.go:59] 8 kube-system pods found
	I1101 01:01:58.047833   58676 system_pods.go:61] "coredns-5dd5756b68-kcjf2" [e5cba8fe-f5c0-48cd-a21b-649caf4405cd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 01:01:58.047848   58676 system_pods.go:61] "etcd-no-preload-008483" [6e8ce64d-5c27-4528-9ecb-4bd1c2ab55c9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 01:01:58.047868   58676 system_pods.go:61] "kube-apiserver-no-preload-008483" [c320b03e-f364-4b38-8f09-5239d66f90e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 01:01:58.047881   58676 system_pods.go:61] "kube-controller-manager-no-preload-008483" [b89beee3-61e6-4efa-926f-43ae6a50e44b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 01:01:58.047893   58676 system_pods.go:61] "kube-proxy-xjfsj" [a7195683-b9ee-440c-82e6-efcd325a35e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 01:01:58.047907   58676 system_pods.go:61] "kube-scheduler-no-preload-008483" [d8c6a1f5-ceca-46af-9a40-22053f5387b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 01:01:58.047920   58676 system_pods.go:61] "metrics-server-57f55c9bc5-49wtw" [b87d5491-9981-48d5-9cf8-34dbd4b24435] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:01:58.047946   58676 system_pods.go:61] "storage-provisioner" [bf9d5910-ae5f-48f9-9358-54b2068c2e2c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 01:01:58.047959   58676 system_pods.go:74] duration metric: took 11.96541ms to wait for pod list to return data ...
	I1101 01:01:58.047971   58676 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:01:58.052170   58676 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:01:58.052205   58676 node_conditions.go:123] node cpu capacity is 2
	I1101 01:01:58.052218   58676 node_conditions.go:105] duration metric: took 4.239786ms to run NodePressure ...
	I1101 01:01:58.052237   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:58.340580   58676 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1101 01:01:58.351480   58676 kubeadm.go:787] kubelet initialised
	I1101 01:01:58.351510   58676 kubeadm.go:788] duration metric: took 10.903426ms waiting for restarted kubelet to initialise ...
	I1101 01:01:58.351520   58676 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:01:58.359099   58676 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-kcjf2" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:00.383123   58676 pod_ready.go:102] pod "coredns-5dd5756b68-kcjf2" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:58.509858   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:01.009429   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:59.988392   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:01.989042   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:02.505009   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:05.004485   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:02.880623   58676 pod_ready.go:102] pod "coredns-5dd5756b68-kcjf2" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:04.878534   58676 pod_ready.go:92] pod "coredns-5dd5756b68-kcjf2" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:04.878556   58676 pod_ready.go:81] duration metric: took 6.519426334s waiting for pod "coredns-5dd5756b68-kcjf2" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:04.878565   58676 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:03.508377   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:05.508570   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:03.990099   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:06.488196   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:07.005182   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:09.505205   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:06.907992   58676 pod_ready.go:102] pod "etcd-no-preload-008483" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:09.400005   58676 pod_ready.go:102] pod "etcd-no-preload-008483" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:09.900354   58676 pod_ready.go:92] pod "etcd-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:09.900379   58676 pod_ready.go:81] duration metric: took 5.021808339s waiting for pod "etcd-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.900394   58676 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.906496   58676 pod_ready.go:92] pod "kube-apiserver-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:09.906520   58676 pod_ready.go:81] duration metric: took 6.117499ms waiting for pod "kube-apiserver-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.906532   58676 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.911830   58676 pod_ready.go:92] pod "kube-controller-manager-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:09.911850   58676 pod_ready.go:81] duration metric: took 5.311751ms waiting for pod "kube-controller-manager-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.911859   58676 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xjfsj" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.916419   58676 pod_ready.go:92] pod "kube-proxy-xjfsj" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:09.916442   58676 pod_ready.go:81] duration metric: took 4.576855ms waiting for pod "kube-proxy-xjfsj" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.916454   58676 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.921501   58676 pod_ready.go:92] pod "kube-scheduler-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:09.921525   58676 pod_ready.go:81] duration metric: took 5.064522ms waiting for pod "kube-scheduler-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.921536   58676 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:07.514883   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:10.008399   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:08.490011   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:10.988504   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:12.989076   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:11.507014   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:13.509053   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:12.205003   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:14.705621   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:12.509113   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:15.009543   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:15.487844   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:17.488178   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:16.003423   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:18.003597   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:20.004472   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:17.205434   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:19.214743   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:17.508997   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:20.008838   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:22.009023   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:19.488902   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:21.988210   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:22.004908   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:24.503394   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:21.704199   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:23.704855   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:25.705319   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:24.508980   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:27.008249   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:23.988985   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:26.489079   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:26.504752   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:28.505579   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:27.709065   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:30.205608   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:29.507299   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:31.509017   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:28.988567   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:31.488567   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:30.507770   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:33.005199   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:32.707783   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:35.206392   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:34.007977   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:36.008250   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:33.988120   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:36.489908   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:35.503482   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:37.504132   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:39.504348   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:37.704511   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:39.705791   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:38.008778   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:40.509040   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:38.987615   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:40.988646   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:42.005253   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:44.008492   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:42.206082   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:44.704875   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:43.009095   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:45.508557   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:43.489792   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:45.987971   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:47.989322   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:46.504096   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:49.004605   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:47.205736   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:49.704264   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:47.510014   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:50.009950   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:50.489334   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:52.987877   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:51.005543   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:53.504243   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:52.205173   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:54.704843   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:52.509247   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:55.009346   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:55.488330   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:57.987845   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:55.504494   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:58.003674   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:00.004598   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:57.205092   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:59.705637   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:57.522422   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:00.007902   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:02.009964   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:59.987956   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:01.989730   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:02.005953   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:04.007095   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:02.205761   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:04.704065   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:04.508531   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:06.512303   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:04.487667   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:06.487854   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:06.503630   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:08.504993   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:06.704568   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:08.705012   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:09.008519   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:11.509450   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:08.488843   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:10.987614   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:12.989824   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:10.505932   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:13.005799   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:11.203683   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:13.204241   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:15.705287   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:14.008244   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:16.009433   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:15.488278   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:17.988683   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:15.503739   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:17.506253   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:20.004613   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:18.204056   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:20.205312   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:18.009706   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:20.508744   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:20.490044   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:22.989002   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:22.504922   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:25.004156   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:22.704711   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:25.205072   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:23.008359   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:25.509196   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:25.487961   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:27.488324   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:27.008179   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:29.504182   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:27.205671   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:29.208402   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:27.509247   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:30.008627   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:29.988286   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:32.487504   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:31.504973   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:34.004168   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:31.704298   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:33.704452   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:32.507959   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:35.008631   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:37.009271   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:34.488458   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:36.488759   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:36.503146   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:38.504444   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:36.204750   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:38.705346   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:39.507406   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:41.509812   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:38.988439   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:41.491496   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:40.505301   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:42.506003   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:45.004872   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:41.204015   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:43.206055   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:45.705597   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:44.008441   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:46.009900   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:43.987813   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:45.988508   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:47.989201   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:47.505799   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:49.506424   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:48.204686   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:50.704155   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:48.511303   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:51.008360   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:50.488123   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:52.488356   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:52.004387   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:54.505016   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:52.705891   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:54.706732   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:53.008988   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:55.507791   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:54.988620   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:56.990186   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:57.005565   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:59.505220   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:57.205342   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:59.215160   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:57.508013   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:59.509883   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:01.510115   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:59.490512   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:01.988008   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:02.004869   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:04.503903   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:01.704963   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:04.204466   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:04.007146   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:06.007815   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:04.488270   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:06.987544   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:06.505818   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:09.006093   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:06.205560   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:08.703961   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:10.705037   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:08.008817   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:10.508585   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:08.988223   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:10.989742   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:12.990669   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:11.503914   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:13.504018   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:13.206290   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:15.704820   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:13.008696   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:15.010312   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:15.487596   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:17.489381   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:15.505665   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:18.004825   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:20.004966   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:18.205022   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:20.703582   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:17.508842   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:20.008489   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:22.008572   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:19.988378   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:22.490000   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:22.005055   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:24.504050   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:22.704263   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:24.704479   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:24.507893   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:27.009371   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:24.988500   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:27.490306   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:26.504850   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:29.003907   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:27.204442   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:29.204906   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:29.508234   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:31.508285   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:29.988549   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:32.490618   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:31.504800   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:33.506025   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:31.704974   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:34.204565   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:33.512784   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:36.009709   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:34.988579   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:37.491535   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:36.011080   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:38.503881   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:36.204772   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:38.205329   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:40.707128   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:38.509404   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:41.009915   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:39.988897   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:42.487751   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:40.504606   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:42.504912   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:44.505101   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:43.205005   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:45.207096   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:43.507714   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:45.508866   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:44.988852   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:47.488268   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:47.004069   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:49.005029   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:47.704762   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:49.705584   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:48.009495   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:50.508392   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:49.488880   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:51.988841   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:51.504680   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:54.010010   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:52.204554   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:54.705101   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:53.008194   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:55.008373   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:57.009351   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:54.489702   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:56.389066   58730 pod_ready.go:81] duration metric: took 4m0.000951404s waiting for pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace to be "Ready" ...
	E1101 01:04:56.389116   58730 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1101 01:04:56.389139   58730 pod_ready.go:38] duration metric: took 4m11.103640013s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:04:56.389173   58730 kubeadm.go:640] restartCluster took 4m34.207263569s
	W1101 01:04:56.389254   58730 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1101 01:04:56.389292   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1101 01:04:56.504421   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:58.505542   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:56.705911   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:58.706099   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:00.706478   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:59.509462   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:02.009472   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:00.509320   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:03.007708   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:03.203884   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:05.204356   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:04.009580   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:06.508160   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:05.505057   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:07.506811   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:10.004080   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:07.205229   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:09.206089   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:08.509319   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:11.009099   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:12.261608   58730 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (15.872291337s)
	I1101 01:05:12.261694   58730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:05:12.275334   58730 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:05:12.284969   58730 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:05:12.295834   58730 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:05:12.295881   58730 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1101 01:05:12.526039   58730 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 01:05:12.005261   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:14.005683   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:11.706864   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:14.204758   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:13.508597   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:16.008784   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:16.506282   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:19.004037   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:16.205361   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:18.704890   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:18.008878   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:20.009861   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:23.201664   58730 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1101 01:05:23.201785   58730 kubeadm.go:322] [preflight] Running pre-flight checks
	I1101 01:05:23.201920   58730 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 01:05:23.202057   58730 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 01:05:23.202178   58730 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 01:05:23.202255   58730 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 01:05:23.204179   58730 out.go:204]   - Generating certificates and keys ...
	I1101 01:05:23.204304   58730 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1101 01:05:23.204384   58730 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1101 01:05:23.204480   58730 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 01:05:23.204557   58730 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1101 01:05:23.204639   58730 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1101 01:05:23.204715   58730 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1101 01:05:23.204792   58730 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1101 01:05:23.204884   58730 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1101 01:05:23.205007   58730 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 01:05:23.205133   58730 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 01:05:23.205195   58730 kubeadm.go:322] [certs] Using the existing "sa" key
	I1101 01:05:23.205273   58730 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 01:05:23.205332   58730 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 01:05:23.205391   58730 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 01:05:23.205461   58730 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 01:05:23.205550   58730 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 01:05:23.205656   58730 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 01:05:23.205734   58730 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 01:05:23.207792   58730 out.go:204]   - Booting up control plane ...
	I1101 01:05:23.207914   58730 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 01:05:23.208028   58730 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 01:05:23.208124   58730 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 01:05:23.208244   58730 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 01:05:23.208322   58730 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 01:05:23.208356   58730 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1101 01:05:23.208496   58730 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 01:05:23.208569   58730 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003034 seconds
	I1101 01:05:23.208662   58730 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 01:05:23.208762   58730 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 01:05:23.208840   58730 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 01:05:23.209055   58730 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-754132 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 01:05:23.209148   58730 kubeadm.go:322] [bootstrap-token] Using token: j0j8ab.rja1mh5j9krst0k4
	I1101 01:05:23.210755   58730 out.go:204]   - Configuring RBAC rules ...
	I1101 01:05:23.210895   58730 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 01:05:23.211001   58730 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 01:05:23.211205   58730 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 01:05:23.211369   58730 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 01:05:23.211509   58730 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 01:05:23.211617   58730 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 01:05:23.211776   58730 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 01:05:23.211851   58730 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1101 01:05:23.211894   58730 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1101 01:05:23.211901   58730 kubeadm.go:322] 
	I1101 01:05:23.211985   58730 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1101 01:05:23.211992   58730 kubeadm.go:322] 
	I1101 01:05:23.212076   58730 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1101 01:05:23.212085   58730 kubeadm.go:322] 
	I1101 01:05:23.212128   58730 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1101 01:05:23.212205   58730 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 01:05:23.212256   58730 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 01:05:23.212263   58730 kubeadm.go:322] 
	I1101 01:05:23.212305   58730 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1101 01:05:23.212314   58730 kubeadm.go:322] 
	I1101 01:05:23.212352   58730 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 01:05:23.212359   58730 kubeadm.go:322] 
	I1101 01:05:23.212400   58730 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1101 01:05:23.212461   58730 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 01:05:23.212568   58730 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 01:05:23.212584   58730 kubeadm.go:322] 
	I1101 01:05:23.212699   58730 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 01:05:23.212787   58730 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1101 01:05:23.212797   58730 kubeadm.go:322] 
	I1101 01:05:23.212862   58730 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token j0j8ab.rja1mh5j9krst0k4 \
	I1101 01:05:23.212943   58730 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 \
	I1101 01:05:23.212962   58730 kubeadm.go:322] 	--control-plane 
	I1101 01:05:23.212968   58730 kubeadm.go:322] 
	I1101 01:05:23.213083   58730 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1101 01:05:23.213093   58730 kubeadm.go:322] 
	I1101 01:05:23.213202   58730 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token j0j8ab.rja1mh5j9krst0k4 \
	I1101 01:05:23.213346   58730 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 
	I1101 01:05:23.213366   58730 cni.go:84] Creating CNI manager for ""
	I1101 01:05:23.213375   58730 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:05:23.215058   58730 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:05:23.216515   58730 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:05:23.251532   58730 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1101 01:05:21.007674   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:23.505067   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:21.204745   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:23.206316   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:25.211036   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:22.507158   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:24.508157   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:26.508990   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:23.291112   58730 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 01:05:23.291192   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:23.291224   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9 minikube.k8s.io/name=embed-certs-754132 minikube.k8s.io/updated_at=2023_11_01T01_05_23_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:23.452410   58730 ops.go:34] apiserver oom_adj: -16
	I1101 01:05:23.635798   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:23.754993   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:24.350830   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:24.850468   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:25.350887   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:25.850719   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:26.350946   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:26.850869   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:27.350851   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:27.850856   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:25.507083   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:27.511273   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:29.974545   59148 pod_ready.go:81] duration metric: took 4m0.000148043s waiting for pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace to be "Ready" ...
	E1101 01:05:29.974585   59148 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1101 01:05:29.974607   59148 pod_ready.go:38] duration metric: took 4m5.715718658s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:05:29.974652   59148 kubeadm.go:640] restartCluster took 4m26.139306333s
	W1101 01:05:29.974746   59148 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1101 01:05:29.974779   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1101 01:05:27.704338   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:30.205751   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:29.008649   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:31.009235   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:28.350920   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:28.850670   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:29.350172   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:29.850241   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:30.351225   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:30.851276   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:31.350289   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:31.850999   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:32.350874   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:32.850500   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:32.708147   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:35.205568   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:33.351023   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:33.851109   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:34.351257   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:34.850212   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:35.350277   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:35.850281   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:36.350770   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:36.456508   58730 kubeadm.go:1081] duration metric: took 13.165385995s to wait for elevateKubeSystemPrivileges.
	I1101 01:05:36.456550   58730 kubeadm.go:406] StartCluster complete in 5m14.31984828s
	I1101 01:05:36.456575   58730 settings.go:142] acquiring lock: {Name:mk7f269e64dfd8d176737f993e01f6e6badafbc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:05:36.456674   58730 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 01:05:36.458488   58730 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/kubeconfig: {Name:mk08da65b6c71084e1cfafb19800038e8c8303e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:05:36.458789   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 01:05:36.458936   58730 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1101 01:05:36.459029   58730 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-754132"
	I1101 01:05:36.459061   58730 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-754132"
	W1101 01:05:36.459076   58730 addons.go:240] addon storage-provisioner should already be in state true
	I1101 01:05:36.459086   58730 addons.go:69] Setting metrics-server=true in profile "embed-certs-754132"
	I1101 01:05:36.459124   58730 addons.go:231] Setting addon metrics-server=true in "embed-certs-754132"
	I1101 01:05:36.459134   58730 host.go:66] Checking if "embed-certs-754132" exists ...
	I1101 01:05:36.459060   58730 config.go:182] Loaded profile config "embed-certs-754132": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:05:36.459062   58730 addons.go:69] Setting default-storageclass=true in profile "embed-certs-754132"
	I1101 01:05:36.459219   58730 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-754132"
	W1101 01:05:36.459138   58730 addons.go:240] addon metrics-server should already be in state true
	I1101 01:05:36.459347   58730 host.go:66] Checking if "embed-certs-754132" exists ...
	I1101 01:05:36.459588   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.459633   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.459638   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.459674   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.459689   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.459713   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.477136   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40825
	I1101 01:05:36.477207   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37895
	I1101 01:05:36.477706   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46261
	I1101 01:05:36.477874   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.477889   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.478086   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.478388   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.478405   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.478540   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.478561   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.478601   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.478622   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.478794   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.478990   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.479037   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.479219   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetState
	I1101 01:05:36.479379   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.479412   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.479587   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.479623   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.483272   58730 addons.go:231] Setting addon default-storageclass=true in "embed-certs-754132"
	W1101 01:05:36.483295   58730 addons.go:240] addon default-storageclass should already be in state true
	I1101 01:05:36.483318   58730 host.go:66] Checking if "embed-certs-754132" exists ...
	I1101 01:05:36.483665   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.483696   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.498137   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46727
	I1101 01:05:36.498148   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37157
	I1101 01:05:36.498530   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.499000   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.499024   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.499329   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.499499   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetState
	I1101 01:05:36.501223   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:05:36.503752   58730 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:05:36.505580   58730 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:05:36.505600   58730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 01:05:36.505617   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:05:36.505756   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37761
	I1101 01:05:36.506307   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.506765   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.506783   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.507257   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.507303   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.507766   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.507786   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.507852   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.507894   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.508136   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.508296   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetState
	I1101 01:05:36.509982   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:05:36.512303   58730 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1101 01:05:36.512065   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:05:36.513712   58730 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 01:05:36.513728   58730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 01:05:36.513749   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:05:36.512082   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:05:36.513819   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:05:36.513839   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:05:36.516632   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:05:36.516867   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:05:36.517052   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:05:36.517489   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:05:36.518036   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:05:36.518058   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:05:36.518360   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:05:36.519431   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:05:36.519602   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:05:36.519742   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:05:36.526881   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35481
	I1101 01:05:36.527462   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.527889   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.527902   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.528341   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.528511   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetState
	I1101 01:05:36.530250   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:05:36.530539   58730 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 01:05:36.530557   58730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 01:05:36.530575   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:05:36.533671   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:05:36.534068   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:05:36.534093   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:05:36.534368   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:05:36.534596   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:05:36.534741   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:05:36.534821   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:05:36.559098   58730 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-754132" context rescaled to 1 replicas
	I1101 01:05:36.559135   58730 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.83 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 01:05:36.561061   58730 out.go:177] * Verifying Kubernetes components...
	I1101 01:05:33.009726   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:35.507972   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:36.562382   58730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:05:36.684098   58730 node_ready.go:35] waiting up to 6m0s for node "embed-certs-754132" to be "Ready" ...
	I1101 01:05:36.684219   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 01:05:36.689836   58730 node_ready.go:49] node "embed-certs-754132" has status "Ready":"True"
	I1101 01:05:36.689863   58730 node_ready.go:38] duration metric: took 5.731179ms waiting for node "embed-certs-754132" to be "Ready" ...
	I1101 01:05:36.689875   58730 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:05:36.707509   58730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:05:36.743671   58730 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 01:05:36.743702   58730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1101 01:05:36.764886   58730 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:36.773895   58730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 01:05:36.810064   58730 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 01:05:36.810095   58730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 01:05:36.888833   58730 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:05:36.888854   58730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 01:05:36.892836   58730 pod_ready.go:92] pod "etcd-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:05:36.892864   58730 pod_ready.go:81] duration metric: took 127.938482ms waiting for pod "etcd-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:36.892879   58730 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:36.968554   58730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:05:36.978210   58730 pod_ready.go:92] pod "kube-apiserver-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:05:36.978239   58730 pod_ready.go:81] duration metric: took 85.351942ms waiting for pod "kube-apiserver-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:36.978254   58730 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:37.154956   58730 pod_ready.go:92] pod "kube-controller-manager-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:05:37.154983   58730 pod_ready.go:81] duration metric: took 176.720364ms waiting for pod "kube-controller-manager-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:37.154997   58730 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cwbfz" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:38.405267   58730 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.720993157s)
	I1101 01:05:38.405302   58730 start.go:926] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1101 01:05:38.840834   58730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.133283925s)
	I1101 01:05:38.840891   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.840906   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.840918   58730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.066970508s)
	I1101 01:05:38.841048   58730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.872463156s)
	I1101 01:05:38.841085   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.841098   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.841320   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.841370   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.841373   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.841328   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.841400   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.841412   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.841426   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.841390   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.841442   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.841454   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.841457   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.841354   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.844717   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.844730   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.844723   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.844744   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.844753   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.844757   58730 addons.go:467] Verifying addon metrics-server=true in "embed-certs-754132"
	I1101 01:05:38.844763   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.844774   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.844773   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.844789   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.844799   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.844808   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.845059   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.845077   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.845092   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.890752   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.890785   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.891075   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.891095   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.891108   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.892878   58730 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I1101 01:05:37.706877   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:39.707206   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:38.894405   58730 addons.go:502] enable addons completed in 2.435477984s: enabled=[metrics-server storage-provisioner default-storageclass]
	I1101 01:05:39.279100   58730 pod_ready.go:102] pod "kube-proxy-cwbfz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:40.775597   58730 pod_ready.go:92] pod "kube-proxy-cwbfz" in "kube-system" namespace has status "Ready":"True"
	I1101 01:05:40.775622   58730 pod_ready.go:81] duration metric: took 3.620618998s waiting for pod "kube-proxy-cwbfz" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:40.775644   58730 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:40.782773   58730 pod_ready.go:92] pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:05:40.782796   58730 pod_ready.go:81] duration metric: took 7.145643ms waiting for pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:40.782806   58730 pod_ready.go:38] duration metric: took 4.092919772s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:05:40.782821   58730 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:05:40.782868   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:05:40.811977   58730 api_server.go:72] duration metric: took 4.252812827s to wait for apiserver process to appear ...
	I1101 01:05:40.812000   58730 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:05:40.812017   58730 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8443/healthz ...
	I1101 01:05:40.817524   58730 api_server.go:279] https://192.168.61.83:8443/healthz returned 200:
	ok
	I1101 01:05:40.819599   58730 api_server.go:141] control plane version: v1.28.3
	I1101 01:05:40.819625   58730 api_server.go:131] duration metric: took 7.617418ms to wait for apiserver health ...
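(For reference, the healthz probe logged above can be reproduced by hand against the same endpoint; the address and port come from the log line above, and -k only skips TLS verification for a quick manual check.)
    curl -k https://192.168.61.83:8443/healthz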
	I1101 01:05:40.819636   58730 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:05:40.826677   58730 system_pods.go:59] 8 kube-system pods found
	I1101 01:05:40.826714   58730 system_pods.go:61] "coredns-5dd5756b68-6kqbc" [e03e6370-35d1-4438-8b18-d62b0a253ea6] Running
	I1101 01:05:40.826722   58730 system_pods.go:61] "etcd-embed-certs-754132" [2cd8789c-8ba8-47ea-82f2-e461cbc9d3b3] Running
	I1101 01:05:40.826729   58730 system_pods.go:61] "kube-apiserver-embed-certs-754132" [81bd13a3-37ea-4bf6-9eb9-e66318137a21] Running
	I1101 01:05:40.826735   58730 system_pods.go:61] "kube-controller-manager-embed-certs-754132" [6aa18435-1990-479b-b975-7ac1d794d967] Running
	I1101 01:05:40.826742   58730 system_pods.go:61] "kube-proxy-cwbfz" [b7f5ba1e-bd63-456b-94cc-0e2c121b7792] Running
	I1101 01:05:40.826748   58730 system_pods.go:61] "kube-scheduler-embed-certs-754132" [64203f31-7c41-42d0-9d6b-bc63e1b423cc] Running
	I1101 01:05:40.826758   58730 system_pods.go:61] "metrics-server-57f55c9bc5-499xs" [617aecda-f132-4358-9da9-bbc4fc625da0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:05:40.826773   58730 system_pods.go:61] "storage-provisioner" [7feb8931-83d0-4968-a295-a4202e8fc8c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 01:05:40.826786   58730 system_pods.go:74] duration metric: took 7.142747ms to wait for pod list to return data ...
	I1101 01:05:40.826799   58730 default_sa.go:34] waiting for default service account to be created ...
	I1101 01:05:40.831268   58730 default_sa.go:45] found service account: "default"
	I1101 01:05:40.831295   58730 default_sa.go:55] duration metric: took 4.485602ms for default service account to be created ...
	I1101 01:05:40.831309   58730 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 01:05:40.891306   58730 system_pods.go:86] 8 kube-system pods found
	I1101 01:05:40.891335   58730 system_pods.go:89] "coredns-5dd5756b68-6kqbc" [e03e6370-35d1-4438-8b18-d62b0a253ea6] Running
	I1101 01:05:40.891341   58730 system_pods.go:89] "etcd-embed-certs-754132" [2cd8789c-8ba8-47ea-82f2-e461cbc9d3b3] Running
	I1101 01:05:40.891346   58730 system_pods.go:89] "kube-apiserver-embed-certs-754132" [81bd13a3-37ea-4bf6-9eb9-e66318137a21] Running
	I1101 01:05:40.891350   58730 system_pods.go:89] "kube-controller-manager-embed-certs-754132" [6aa18435-1990-479b-b975-7ac1d794d967] Running
	I1101 01:05:40.891354   58730 system_pods.go:89] "kube-proxy-cwbfz" [b7f5ba1e-bd63-456b-94cc-0e2c121b7792] Running
	I1101 01:05:40.891358   58730 system_pods.go:89] "kube-scheduler-embed-certs-754132" [64203f31-7c41-42d0-9d6b-bc63e1b423cc] Running
	I1101 01:05:40.891366   58730 system_pods.go:89] "metrics-server-57f55c9bc5-499xs" [617aecda-f132-4358-9da9-bbc4fc625da0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:05:40.891373   58730 system_pods.go:89] "storage-provisioner" [7feb8931-83d0-4968-a295-a4202e8fc8c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 01:05:40.891381   58730 system_pods.go:126] duration metric: took 60.065984ms to wait for k8s-apps to be running ...
	I1101 01:05:40.891391   58730 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 01:05:40.891436   58730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:05:40.906845   58730 system_svc.go:56] duration metric: took 15.443235ms WaitForService to wait for kubelet.
	I1101 01:05:40.906875   58730 kubeadm.go:581] duration metric: took 4.347718478s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1101 01:05:40.906895   58730 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:05:41.089628   58730 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:05:41.089654   58730 node_conditions.go:123] node cpu capacity is 2
	I1101 01:05:41.089664   58730 node_conditions.go:105] duration metric: took 182.764311ms to run NodePressure ...
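(The ephemeral-storage and CPU capacity figures above can also be read straight from the node object; an illustrative query, assuming the kubectl context created for this profile:)
    kubectl --context embed-certs-754132 get node embed-certs-754132 -o jsonpath='{.status.capacity}'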
	I1101 01:05:41.089674   58730 start.go:228] waiting for startup goroutines ...
	I1101 01:05:41.089680   58730 start.go:233] waiting for cluster config update ...
	I1101 01:05:41.089693   58730 start.go:242] writing updated cluster config ...
	I1101 01:05:41.089950   58730 ssh_runner.go:195] Run: rm -f paused
	I1101 01:05:41.140594   58730 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1101 01:05:41.143142   58730 out.go:177] * Done! kubectl is now configured to use "embed-certs-754132" cluster and "default" namespace by default
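(With the embed-certs-754132 context configured as reported above, a quick sanity check of the finished cluster looks like this; purely illustrative, not output from this run:)
    kubectl --context embed-certs-754132 get nodes
    kubectl --context embed-certs-754132 get pods -A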
	I1101 01:05:37.516552   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:40.009373   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:43.882201   59148 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.907397495s)
	I1101 01:05:43.882275   59148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:05:43.897793   59148 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:05:43.908350   59148 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:05:43.919013   59148 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:05:43.919066   59148 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1101 01:05:43.992534   59148 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1101 01:05:43.992653   59148 kubeadm.go:322] [preflight] Running pre-flight checks
	I1101 01:05:44.162750   59148 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 01:05:44.162906   59148 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 01:05:44.163052   59148 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 01:05:44.398016   59148 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 01:05:44.399998   59148 out.go:204]   - Generating certificates and keys ...
	I1101 01:05:44.400102   59148 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1101 01:05:44.400189   59148 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1101 01:05:44.400334   59148 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 01:05:44.400431   59148 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1101 01:05:44.400526   59148 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1101 01:05:44.400602   59148 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1101 01:05:44.400736   59148 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1101 01:05:44.400821   59148 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1101 01:05:44.401336   59148 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 01:05:44.401936   59148 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 01:05:44.402420   59148 kubeadm.go:322] [certs] Using the existing "sa" key
	I1101 01:05:44.402515   59148 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 01:05:44.470807   59148 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 01:05:44.642677   59148 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 01:05:44.768991   59148 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 01:05:45.052817   59148 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 01:05:45.053698   59148 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 01:05:45.056339   59148 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 01:05:42.204108   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:44.205679   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:42.508073   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:43.201762   58823 pod_ready.go:81] duration metric: took 4m0.000100455s waiting for pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace to be "Ready" ...
	E1101 01:05:43.201795   58823 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1101 01:05:43.201816   58823 pod_ready.go:38] duration metric: took 4m1.199592624s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:05:43.201848   58823 kubeadm.go:640] restartCluster took 4m57.555406731s
	W1101 01:05:43.201899   58823 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1101 01:05:43.201920   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1101 01:05:45.058304   59148 out.go:204]   - Booting up control plane ...
	I1101 01:05:45.058434   59148 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 01:05:45.058565   59148 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 01:05:45.060937   59148 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 01:05:45.078776   59148 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 01:05:45.079692   59148 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 01:05:45.079771   59148 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1101 01:05:45.204880   59148 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 01:05:46.208575   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:48.705698   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:50.708163   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:48.240337   58823 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.038387523s)
	I1101 01:05:48.240417   58823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:05:48.257585   58823 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:05:48.266949   58823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:05:48.277302   58823 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:05:48.277346   58823 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1101 01:05:48.514394   58823 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 01:05:54.708746   59148 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503354 seconds
	I1101 01:05:54.708894   59148 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 01:05:54.726194   59148 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 01:05:55.266392   59148 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 01:05:55.266670   59148 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-639310 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 01:05:55.783906   59148 kubeadm.go:322] [bootstrap-token] Using token: ilpx6n.m6vs8mqxrjuf2w8f
	I1101 01:05:53.205312   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:55.206016   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:55.786231   59148 out.go:204]   - Configuring RBAC rules ...
	I1101 01:05:55.786370   59148 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 01:05:55.793682   59148 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 01:05:55.812319   59148 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 01:05:55.819324   59148 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 01:05:55.825785   59148 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 01:05:55.831793   59148 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 01:05:55.858443   59148 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 01:05:56.195472   59148 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1101 01:05:56.248405   59148 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1101 01:05:56.249655   59148 kubeadm.go:322] 
	I1101 01:05:56.249745   59148 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1101 01:05:56.249759   59148 kubeadm.go:322] 
	I1101 01:05:56.249852   59148 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1101 01:05:56.249869   59148 kubeadm.go:322] 
	I1101 01:05:56.249931   59148 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1101 01:05:56.249992   59148 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 01:05:56.250076   59148 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 01:05:56.250088   59148 kubeadm.go:322] 
	I1101 01:05:56.250163   59148 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1101 01:05:56.250172   59148 kubeadm.go:322] 
	I1101 01:05:56.250261   59148 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 01:05:56.250281   59148 kubeadm.go:322] 
	I1101 01:05:56.250344   59148 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1101 01:05:56.250436   59148 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 01:05:56.250560   59148 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 01:05:56.250574   59148 kubeadm.go:322] 
	I1101 01:05:56.250663   59148 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 01:05:56.250757   59148 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1101 01:05:56.250769   59148 kubeadm.go:322] 
	I1101 01:05:56.250881   59148 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token ilpx6n.m6vs8mqxrjuf2w8f \
	I1101 01:05:56.251011   59148 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 \
	I1101 01:05:56.251041   59148 kubeadm.go:322] 	--control-plane 
	I1101 01:05:56.251053   59148 kubeadm.go:322] 
	I1101 01:05:56.251150   59148 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1101 01:05:56.251162   59148 kubeadm.go:322] 
	I1101 01:05:56.251259   59148 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token ilpx6n.m6vs8mqxrjuf2w8f \
	I1101 01:05:56.251383   59148 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 
	I1101 01:05:56.251922   59148 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 01:05:56.251982   59148 cni.go:84] Creating CNI manager for ""
	I1101 01:05:56.252008   59148 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:05:56.254247   59148 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:05:56.256068   59148 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:05:56.281994   59148 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
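(The generated bridge conflist itself is not printed in the log; if needed it can be inspected on the node, for example with a command along these lines, assuming the default-k8s-diff-port-639310 profile:)
    minikube ssh -p default-k8s-diff-port-639310 -- sudo cat /etc/cni/net.d/1-k8s.conflist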
	I1101 01:05:56.324660   59148 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 01:05:56.324796   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:56.324863   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9 minikube.k8s.io/name=default-k8s-diff-port-639310 minikube.k8s.io/updated_at=2023_11_01T01_05_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:56.739064   59148 ops.go:34] apiserver oom_adj: -16
	I1101 01:05:56.739245   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:56.834852   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:57.432044   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:57.931920   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:58.432414   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:58.932871   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:59.432755   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:59.932515   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:57.704234   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:59.705516   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:01.231970   58823 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1101 01:06:01.232062   58823 kubeadm.go:322] [preflight] Running pre-flight checks
	I1101 01:06:01.232156   58823 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 01:06:01.232289   58823 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 01:06:01.232419   58823 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 01:06:01.232595   58823 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 01:06:01.232714   58823 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 01:06:01.232790   58823 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1101 01:06:01.232890   58823 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 01:06:01.235429   58823 out.go:204]   - Generating certificates and keys ...
	I1101 01:06:01.235533   58823 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1101 01:06:01.235606   58823 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1101 01:06:01.235675   58823 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 01:06:01.235782   58823 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1101 01:06:01.235889   58823 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1101 01:06:01.235973   58823 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1101 01:06:01.236065   58823 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1101 01:06:01.236153   58823 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1101 01:06:01.236263   58823 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 01:06:01.236383   58823 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 01:06:01.236447   58823 kubeadm.go:322] [certs] Using the existing "sa" key
	I1101 01:06:01.236528   58823 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 01:06:01.236607   58823 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 01:06:01.236728   58823 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 01:06:01.236811   58823 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 01:06:01.236877   58823 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 01:06:01.236955   58823 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 01:06:01.238699   58823 out.go:204]   - Booting up control plane ...
	I1101 01:06:01.238808   58823 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 01:06:01.238904   58823 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 01:06:01.238990   58823 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 01:06:01.239092   58823 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 01:06:01.239289   58823 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 01:06:01.239387   58823 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.004023 seconds
	I1101 01:06:01.239528   58823 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 01:06:01.239741   58823 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 01:06:01.239796   58823 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 01:06:01.239971   58823 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-330042 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1101 01:06:01.240056   58823 kubeadm.go:322] [bootstrap-token] Using token: lseik6.3ozwuciianl7vrri
	I1101 01:06:01.241690   58823 out.go:204]   - Configuring RBAC rules ...
	I1101 01:06:01.241825   58823 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 01:06:01.242015   58823 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 01:06:01.242170   58823 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 01:06:01.242265   58823 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 01:06:01.242380   58823 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 01:06:01.242448   58823 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1101 01:06:01.242517   58823 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1101 01:06:01.242549   58823 kubeadm.go:322] 
	I1101 01:06:01.242631   58823 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1101 01:06:01.242646   58823 kubeadm.go:322] 
	I1101 01:06:01.242753   58823 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1101 01:06:01.242764   58823 kubeadm.go:322] 
	I1101 01:06:01.242801   58823 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1101 01:06:01.242883   58823 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 01:06:01.242956   58823 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 01:06:01.242965   58823 kubeadm.go:322] 
	I1101 01:06:01.243041   58823 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1101 01:06:01.243152   58823 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 01:06:01.243249   58823 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 01:06:01.243261   58823 kubeadm.go:322] 
	I1101 01:06:01.243357   58823 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1101 01:06:01.243421   58823 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1101 01:06:01.243425   58823 kubeadm.go:322] 
	I1101 01:06:01.243490   58823 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token lseik6.3ozwuciianl7vrri \
	I1101 01:06:01.243597   58823 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 \
	I1101 01:06:01.243619   58823 kubeadm.go:322]     --control-plane 	  
	I1101 01:06:01.243623   58823 kubeadm.go:322] 
	I1101 01:06:01.243697   58823 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1101 01:06:01.243702   58823 kubeadm.go:322] 
	I1101 01:06:01.243773   58823 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token lseik6.3ozwuciianl7vrri \
	I1101 01:06:01.243923   58823 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 
	I1101 01:06:01.243967   58823 cni.go:84] Creating CNI manager for ""
	I1101 01:06:01.243979   58823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:06:01.246766   58823 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:06:01.248244   58823 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:06:01.274713   58823 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1101 01:06:01.299087   58823 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 01:06:01.299184   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:01.299241   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9 minikube.k8s.io/name=old-k8s-version-330042 minikube.k8s.io/updated_at=2023_11_01T01_06_01_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:01.350480   58823 ops.go:34] apiserver oom_adj: -16
	I1101 01:06:01.668212   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:01.795923   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:02.398955   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:00.432038   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:00.932486   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:01.431924   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:01.932050   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:02.432828   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:02.932070   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:03.432833   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:03.931826   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:04.432522   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:04.932660   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:01.705717   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:04.205431   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:02.899285   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:03.398507   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:03.898445   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:04.399301   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:04.898647   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:05.399211   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:05.899099   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:06.398426   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:06.898703   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:07.399266   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:05.431880   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:05.932001   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:06.432804   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:06.932744   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:07.432405   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:07.932531   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:08.432007   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:08.560694   59148 kubeadm.go:1081] duration metric: took 12.235943971s to wait for elevateKubeSystemPrivileges.
	I1101 01:06:08.560733   59148 kubeadm.go:406] StartCluster complete in 5m4.77698433s
	I1101 01:06:08.560756   59148 settings.go:142] acquiring lock: {Name:mk7f269e64dfd8d176737f993e01f6e6badafbc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:06:08.560862   59148 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 01:06:08.563346   59148 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/kubeconfig: {Name:mk08da65b6c71084e1cfafb19800038e8c8303e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:06:08.563655   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 01:06:08.563793   59148 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1101 01:06:08.563857   59148 config.go:182] Loaded profile config "default-k8s-diff-port-639310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:06:08.563874   59148 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-639310"
	I1101 01:06:08.563892   59148 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-639310"
	I1101 01:06:08.563905   59148 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-639310"
	I1101 01:06:08.563917   59148 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-639310"
	I1101 01:06:08.563950   59148 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-639310"
	I1101 01:06:08.563899   59148 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-639310"
	W1101 01:06:08.563962   59148 addons.go:240] addon metrics-server should already be in state true
	W1101 01:06:08.563990   59148 addons.go:240] addon storage-provisioner should already be in state true
	I1101 01:06:08.564025   59148 host.go:66] Checking if "default-k8s-diff-port-639310" exists ...
	I1101 01:06:08.564064   59148 host.go:66] Checking if "default-k8s-diff-port-639310" exists ...
	I1101 01:06:08.564369   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.564404   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.564421   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.564453   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.564455   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.564488   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.581714   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37509
	I1101 01:06:08.582180   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.583081   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35137
	I1101 01:06:08.583312   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.583332   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.583553   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41541
	I1101 01:06:08.583702   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.583714   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.583891   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.584174   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.584200   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.584272   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.584302   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.584638   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.584687   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.584737   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.584993   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.585152   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetState
	I1101 01:06:08.585215   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.585256   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.588703   59148 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-639310"
	W1101 01:06:08.588728   59148 addons.go:240] addon default-storageclass should already be in state true
	I1101 01:06:08.588754   59148 host.go:66] Checking if "default-k8s-diff-port-639310" exists ...
	I1101 01:06:08.589158   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.589193   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.600826   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40787
	I1101 01:06:08.601314   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.601952   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.601976   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.602335   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.602560   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetState
	I1101 01:06:08.603276   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35887
	I1101 01:06:08.603415   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36765
	I1101 01:06:08.603803   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.604098   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.604276   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.604290   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.604490   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.604506   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.604573   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:06:08.604778   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.606338   59148 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:06:08.605001   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.605380   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.607632   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.607705   59148 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:06:08.607717   59148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 01:06:08.607731   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:06:08.607995   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetState
	I1101 01:06:08.610424   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:06:08.612025   59148 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1101 01:06:08.613346   59148 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 01:06:08.613365   59148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 01:06:08.613386   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:06:08.611304   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:06:08.611864   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:06:08.613461   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:06:08.613508   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:06:08.613650   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:06:08.613769   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:06:08.613869   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:06:08.618717   59148 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-639310" context rescaled to 1 replicas
	I1101 01:06:08.618755   59148 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.97 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 01:06:08.620291   59148 out.go:177] * Verifying Kubernetes components...
	I1101 01:06:08.618896   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:06:08.620048   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:06:08.621662   59148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:06:08.621747   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:06:08.621777   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:06:08.622129   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:06:08.622359   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:06:08.622526   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:06:08.629241   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42169
	I1101 01:06:08.629773   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.630164   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.630181   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.630428   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.630558   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetState
	I1101 01:06:08.631892   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:06:08.632176   59148 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 01:06:08.632197   59148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 01:06:08.632216   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:06:08.634872   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:06:08.635211   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:06:08.635241   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:06:08.635375   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:06:08.635576   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:06:08.635713   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:06:08.635839   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:06:08.984005   59148 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 01:06:08.984032   59148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1101 01:06:08.991838   59148 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-639310" to be "Ready" ...
	I1101 01:06:08.991921   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 01:06:09.011096   59148 node_ready.go:49] node "default-k8s-diff-port-639310" has status "Ready":"True"
	I1101 01:06:09.011124   59148 node_ready.go:38] duration metric: took 19.250763ms waiting for node "default-k8s-diff-port-639310" to be "Ready" ...
	I1101 01:06:09.011136   59148 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:06:09.043526   59148 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace to be "Ready" ...
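(The readiness wait above can be approximated with kubectl directly; the namespace and label selector mirror the log, while the context name is assumed to match the profile:)
    kubectl --context default-k8s-diff-port-639310 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m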
	I1101 01:06:09.071032   59148 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 01:06:09.071065   59148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 01:06:09.089683   59148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 01:06:09.090332   59148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:06:09.139676   59148 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:06:09.139702   59148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 01:06:09.219436   59148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:06:06.705499   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:09.207584   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:09.922465   58676 pod_ready.go:81] duration metric: took 4m0.000913678s waiting for pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace to be "Ready" ...
	E1101 01:06:09.922511   58676 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1101 01:06:09.922529   58676 pod_ready.go:38] duration metric: took 4m11.570999497s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:06:09.922566   58676 kubeadm.go:640] restartCluster took 4m30.866358786s
	W1101 01:06:09.922644   58676 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1101 01:06:09.922688   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1101 01:06:11.075881   59148 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.083916099s)
	I1101 01:06:11.075915   59148 start.go:926] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1101 01:06:11.075946   59148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.986221728s)
	I1101 01:06:11.075997   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.076012   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.076348   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.076367   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.076377   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.076386   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.076620   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.076639   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.119713   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.119741   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.120145   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.120170   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.120145   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | Closing plugin on server side
	I1101 01:06:11.172242   59148 pod_ready.go:102] pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:11.954880   59148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.864508967s)
	I1101 01:06:11.954945   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.954960   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.955014   59148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.735537793s)
	I1101 01:06:11.955074   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.955088   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.955379   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | Closing plugin on server side
	I1101 01:06:11.955411   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.955418   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.955429   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.955438   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.957487   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | Closing plugin on server side
	I1101 01:06:11.957532   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.957549   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.957537   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.957612   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.957566   59148 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-639310"
	I1101 01:06:11.957643   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.957672   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.958036   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.958063   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.960489   59148 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I1101 01:06:07.899402   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:08.398731   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:08.898547   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:09.399015   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:09.898437   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:10.399024   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:10.899108   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:11.398482   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:11.898943   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:12.399022   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:11.962129   59148 addons.go:502] enable addons completed in 3.39833009s: enabled=[default-storageclass metrics-server storage-provisioner]
	I1101 01:06:13.684297   59148 pod_ready.go:102] pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:12.899212   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:13.398415   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:13.898444   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:14.398630   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:14.898427   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:15.399212   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:15.898869   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:16.399289   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:16.588122   58823 kubeadm.go:1081] duration metric: took 15.28901357s to wait for elevateKubeSystemPrivileges.
	I1101 01:06:16.588166   58823 kubeadm.go:406] StartCluster complete in 5m31.002121514s
	I1101 01:06:16.588190   58823 settings.go:142] acquiring lock: {Name:mk7f269e64dfd8d176737f993e01f6e6badafbc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:06:16.588290   58823 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 01:06:16.590925   58823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/kubeconfig: {Name:mk08da65b6c71084e1cfafb19800038e8c8303e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:06:16.591235   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 01:06:16.591339   58823 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1101 01:06:16.591416   58823 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-330042"
	I1101 01:06:16.591436   58823 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-330042"
	W1101 01:06:16.591444   58823 addons.go:240] addon storage-provisioner should already be in state true
	I1101 01:06:16.591477   58823 config.go:182] Loaded profile config "old-k8s-version-330042": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1101 01:06:16.591517   58823 host.go:66] Checking if "old-k8s-version-330042" exists ...
	I1101 01:06:16.591525   58823 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-330042"
	I1101 01:06:16.591541   58823 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-330042"
	I1101 01:06:16.591923   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.591924   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.591962   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.591980   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.592045   58823 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-330042"
	I1101 01:06:16.592064   58823 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-330042"
	W1101 01:06:16.592071   58823 addons.go:240] addon metrics-server should already be in state true
	I1101 01:06:16.592104   58823 host.go:66] Checking if "old-k8s-version-330042" exists ...
	I1101 01:06:16.592424   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.592468   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.610602   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35459
	I1101 01:06:16.611188   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.611722   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.611752   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.611893   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35425
	I1101 01:06:16.612233   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.612315   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.612802   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.612841   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.613196   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.613215   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.613550   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.613571   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39319
	I1101 01:06:16.613949   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.614126   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.614159   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.614425   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.614438   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.614811   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.614997   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetState
	I1101 01:06:16.617747   58823 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-330042"
	W1101 01:06:16.617763   58823 addons.go:240] addon default-storageclass should already be in state true
	I1101 01:06:16.617783   58823 host.go:66] Checking if "old-k8s-version-330042" exists ...
	I1101 01:06:16.618021   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.618044   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.633877   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37903
	I1101 01:06:16.634227   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34049
	I1101 01:06:16.634436   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.635052   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.635225   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.635251   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.635588   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.635603   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.635656   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.636032   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.636092   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetState
	I1101 01:06:16.636310   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetState
	I1101 01:06:16.637897   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:06:16.640069   58823 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:06:16.638479   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:06:16.640887   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35501
	I1101 01:06:16.641511   58823 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:06:16.641523   58823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 01:06:16.641540   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:06:16.642477   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.643099   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.643115   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.643826   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.644397   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.644432   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.644515   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:06:16.644534   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:06:16.644549   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:06:16.644743   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:06:16.644908   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:06:16.645006   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:06:16.645102   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:06:16.648901   58823 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1101 01:06:16.650287   58823 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 01:06:16.650299   58823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 01:06:16.650316   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:06:16.654323   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:06:16.654694   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:06:16.654720   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:06:16.655020   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:06:16.655268   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:06:16.655450   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:06:16.655600   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:06:16.663888   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32991
	I1101 01:06:16.664490   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.665023   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.665049   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.665533   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.665720   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetState
	I1101 01:06:16.667516   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:06:16.667817   58823 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 01:06:16.667837   58823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 01:06:16.667856   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:06:16.670789   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:06:16.671306   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:06:16.671332   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:06:16.671519   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:06:16.671688   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:06:16.671811   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:06:16.671974   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:06:16.738145   58823 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-330042" context rescaled to 1 replicas
	I1101 01:06:16.738193   58823 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 01:06:16.740269   58823 out.go:177] * Verifying Kubernetes components...
	I1101 01:06:16.741889   58823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:06:16.827316   58823 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 01:06:16.827347   58823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1101 01:06:16.846888   58823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:06:16.868760   58823 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-330042" to be "Ready" ...
	I1101 01:06:16.868848   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 01:06:16.885920   58823 node_ready.go:49] node "old-k8s-version-330042" has status "Ready":"True"
	I1101 01:06:16.885962   58823 node_ready.go:38] duration metric: took 17.171382ms waiting for node "old-k8s-version-330042" to be "Ready" ...
	I1101 01:06:16.885975   58823 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:06:16.907070   58823 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-v2xlz" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:16.929166   58823 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 01:06:16.929190   58823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 01:06:16.946209   58823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 01:06:17.010599   58823 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:06:17.010628   58823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 01:06:17.132054   58823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:06:17.868039   58823 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1101 01:06:17.868039   58823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.021104248s)
	I1101 01:06:17.868120   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:17.868126   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:17.868140   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:17.868142   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:17.870315   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Closing plugin on server side
	I1101 01:06:17.870338   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Closing plugin on server side
	I1101 01:06:17.870352   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:17.870364   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:17.870378   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:17.870400   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:17.870429   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:17.870439   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:17.870448   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:17.870470   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:17.870865   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Closing plugin on server side
	I1101 01:06:17.870866   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Closing plugin on server side
	I1101 01:06:17.870876   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:17.870890   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:17.870899   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:17.870915   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:17.920542   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:17.920570   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:17.920923   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Closing plugin on server side
	I1101 01:06:17.920969   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:17.920980   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:18.189030   58823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.056928538s)
	I1101 01:06:18.189096   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:18.189109   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:18.189446   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Closing plugin on server side
	I1101 01:06:18.189464   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:18.189476   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:18.189486   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:18.189506   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:18.189735   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:18.189752   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:18.189760   58823 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-330042"
	I1101 01:06:18.192103   58823 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1101 01:06:16.156689   59148 pod_ready.go:102] pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:18.158318   59148 pod_ready.go:102] pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:18.194035   58823 addons.go:502] enable addons completed in 1.602699312s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1101 01:06:18.978162   58823 pod_ready.go:102] pod "coredns-5644d7b6d9-v2xlz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:21.456448   58823 pod_ready.go:102] pod "coredns-5644d7b6d9-v2xlz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:20.657398   59148 pod_ready.go:102] pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:22.156680   59148 pod_ready.go:97] pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.72.97 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-11-01 01:06:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-11-01 01:06:11 +0000 UTC,FinishedAt:2023-11-01 01:06:21 +0000 UTC,ContainerID:cri-o://1ecc4b16207e32548d5d59a4bb7a01519d7e5eaf75b05171abd6c8c635656811,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://1ecc4b16207e32548d5d59a4bb7a01519d7e5eaf75b05171abd6c8c635656811 Started:0xc002af16c0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1101 01:06:22.156709   59148 pod_ready.go:81] duration metric: took 13.113156669s waiting for pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace to be "Ready" ...
	E1101 01:06:22.156718   59148 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.72.97 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-11-01 01:06:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-11-01 01:06:11 +0000 UTC,FinishedAt:2023-11-01 01:06:21 +0000 UTC,ContainerID:cri-o://1ecc4b16207e32548d5d59a4bb7a01519d7e5eaf75b05171abd6c8c635656811,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://1ecc4b16207e32548d5d59a4bb7a01519d7e5eaf75b05171abd6c8c635656811 Started:0xc002af16c0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1101 01:06:22.156726   59148 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rgzt8" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.163387   59148 pod_ready.go:92] pod "coredns-5dd5756b68-rgzt8" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:22.163410   59148 pod_ready.go:81] duration metric: took 6.677078ms waiting for pod "coredns-5dd5756b68-rgzt8" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.163423   59148 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.168499   59148 pod_ready.go:92] pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:22.168519   59148 pod_ready.go:81] duration metric: took 5.088683ms waiting for pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.168528   59148 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.174117   59148 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:22.174143   59148 pod_ready.go:81] duration metric: took 5.607251ms waiting for pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.174157   59148 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.179321   59148 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:22.179344   59148 pod_ready.go:81] duration metric: took 5.178241ms waiting for pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.179356   59148 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kzgzn" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.554016   59148 pod_ready.go:92] pod "kube-proxy-kzgzn" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:22.554047   59148 pod_ready.go:81] duration metric: took 374.683914ms waiting for pod "kube-proxy-kzgzn" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.554061   59148 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.954192   59148 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:22.954216   59148 pod_ready.go:81] duration metric: took 400.146517ms waiting for pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.954226   59148 pod_ready.go:38] duration metric: took 13.943077925s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:06:22.954243   59148 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:06:22.954294   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:06:22.970594   59148 api_server.go:72] duration metric: took 14.351804953s to wait for apiserver process to appear ...
	I1101 01:06:22.970621   59148 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:06:22.970638   59148 api_server.go:253] Checking apiserver healthz at https://192.168.72.97:8444/healthz ...
	I1101 01:06:22.976061   59148 api_server.go:279] https://192.168.72.97:8444/healthz returned 200:
	ok
	I1101 01:06:22.977368   59148 api_server.go:141] control plane version: v1.28.3
	I1101 01:06:22.977390   59148 api_server.go:131] duration metric: took 6.761145ms to wait for apiserver health ...
	I1101 01:06:22.977398   59148 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:06:23.156987   59148 system_pods.go:59] 8 kube-system pods found
	I1101 01:06:23.157014   59148 system_pods.go:61] "coredns-5dd5756b68-rgzt8" [6d136c6a-e0b2-44c3-a17b-85649d6ff7b7] Running
	I1101 01:06:23.157018   59148 system_pods.go:61] "etcd-default-k8s-diff-port-639310" [9cc2eba7-c72f-4a6f-9c55-8cce5586b574] Running
	I1101 01:06:23.157024   59148 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-639310" [e2b16d1e-af9f-452e-8243-5267f781ab19] Running
	I1101 01:06:23.157028   59148 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-639310" [9173e21f-a13f-4234-94a1-1976881ee23d] Running
	I1101 01:06:23.157034   59148 system_pods.go:61] "kube-proxy-kzgzn" [32d59980-f28a-482c-9aa8-8502915417f0] Running
	I1101 01:06:23.157038   59148 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-639310" [449df462-911a-4afa-8ca5-f9fccce9ecac] Running
	I1101 01:06:23.157046   59148 system_pods.go:61] "metrics-server-57f55c9bc5-65ph4" [4683706e-65f6-4845-a5ad-60da8cd20d8e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:23.157053   59148 system_pods.go:61] "storage-provisioner" [eaba9583-e564-4804-9cd3-2b4de36c85da] Running
	I1101 01:06:23.157060   59148 system_pods.go:74] duration metric: took 179.656649ms to wait for pod list to return data ...
	I1101 01:06:23.157067   59148 default_sa.go:34] waiting for default service account to be created ...
	I1101 01:06:23.352990   59148 default_sa.go:45] found service account: "default"
	I1101 01:06:23.353024   59148 default_sa.go:55] duration metric: took 195.950242ms for default service account to be created ...
	I1101 01:06:23.353034   59148 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 01:06:23.557472   59148 system_pods.go:86] 8 kube-system pods found
	I1101 01:06:23.557498   59148 system_pods.go:89] "coredns-5dd5756b68-rgzt8" [6d136c6a-e0b2-44c3-a17b-85649d6ff7b7] Running
	I1101 01:06:23.557505   59148 system_pods.go:89] "etcd-default-k8s-diff-port-639310" [9cc2eba7-c72f-4a6f-9c55-8cce5586b574] Running
	I1101 01:06:23.557512   59148 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-639310" [e2b16d1e-af9f-452e-8243-5267f781ab19] Running
	I1101 01:06:23.557518   59148 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-639310" [9173e21f-a13f-4234-94a1-1976881ee23d] Running
	I1101 01:06:23.557524   59148 system_pods.go:89] "kube-proxy-kzgzn" [32d59980-f28a-482c-9aa8-8502915417f0] Running
	I1101 01:06:23.557531   59148 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-639310" [449df462-911a-4afa-8ca5-f9fccce9ecac] Running
	I1101 01:06:23.557541   59148 system_pods.go:89] "metrics-server-57f55c9bc5-65ph4" [4683706e-65f6-4845-a5ad-60da8cd20d8e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:23.557554   59148 system_pods.go:89] "storage-provisioner" [eaba9583-e564-4804-9cd3-2b4de36c85da] Running
	I1101 01:06:23.557561   59148 system_pods.go:126] duration metric: took 204.522772ms to wait for k8s-apps to be running ...
	I1101 01:06:23.557571   59148 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 01:06:23.557614   59148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:06:23.572950   59148 system_svc.go:56] duration metric: took 15.367105ms WaitForService to wait for kubelet.
	I1101 01:06:23.572979   59148 kubeadm.go:581] duration metric: took 14.954198383s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1101 01:06:23.572995   59148 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:06:23.754816   59148 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:06:23.754852   59148 node_conditions.go:123] node cpu capacity is 2
	I1101 01:06:23.754865   59148 node_conditions.go:105] duration metric: took 181.864765ms to run NodePressure ...
	I1101 01:06:23.754879   59148 start.go:228] waiting for startup goroutines ...
	I1101 01:06:23.754887   59148 start.go:233] waiting for cluster config update ...
	I1101 01:06:23.754902   59148 start.go:242] writing updated cluster config ...
	I1101 01:06:23.755221   59148 ssh_runner.go:195] Run: rm -f paused
	I1101 01:06:23.805298   59148 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1101 01:06:23.807226   59148 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-639310" cluster and "default" namespace by default
	I1101 01:06:24.353352   58676 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.430634921s)
	I1101 01:06:24.353418   58676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:06:24.367115   58676 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:06:24.376272   58676 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:06:24.385067   58676 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:06:24.385105   58676 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1101 01:06:24.436586   58676 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1101 01:06:24.436698   58676 kubeadm.go:322] [preflight] Running pre-flight checks
	I1101 01:06:24.592267   58676 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 01:06:24.592409   58676 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 01:06:24.592529   58676 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 01:06:24.834834   58676 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 01:06:24.836680   58676 out.go:204]   - Generating certificates and keys ...
	I1101 01:06:24.836825   58676 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1101 01:06:24.836918   58676 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1101 01:06:24.837052   58676 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 01:06:24.837150   58676 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1101 01:06:24.837378   58676 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1101 01:06:24.838501   58676 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1101 01:06:24.838970   58676 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1101 01:06:24.839488   58676 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1101 01:06:24.840058   58676 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 01:06:24.840454   58676 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 01:06:24.840925   58676 kubeadm.go:322] [certs] Using the existing "sa" key
	I1101 01:06:24.841017   58676 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 01:06:25.117460   58676 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 01:06:25.218894   58676 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 01:06:25.319416   58676 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 01:06:25.555023   58676 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 01:06:25.555490   58676 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 01:06:25.558041   58676 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 01:06:25.559946   58676 out.go:204]   - Booting up control plane ...
	I1101 01:06:25.560090   58676 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 01:06:25.560212   58676 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 01:06:25.560321   58676 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 01:06:25.577307   58676 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 01:06:25.580427   58676 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 01:06:25.580508   58676 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1101 01:06:25.710362   58676 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 01:06:23.963710   58823 pod_ready.go:102] pod "coredns-5644d7b6d9-v2xlz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:26.455851   58823 pod_ready.go:92] pod "coredns-5644d7b6d9-v2xlz" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:26.455880   58823 pod_ready.go:81] duration metric: took 9.548782268s waiting for pod "coredns-5644d7b6d9-v2xlz" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:26.455889   58823 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hkl2m" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:26.461243   58823 pod_ready.go:92] pod "kube-proxy-hkl2m" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:26.461277   58823 pod_ready.go:81] duration metric: took 5.380815ms waiting for pod "kube-proxy-hkl2m" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:26.461289   58823 pod_ready.go:38] duration metric: took 9.575303239s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:06:26.461314   58823 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:06:26.461372   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:06:26.476212   58823 api_server.go:72] duration metric: took 9.737981323s to wait for apiserver process to appear ...
	I1101 01:06:26.476245   58823 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:06:26.476268   58823 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I1101 01:06:26.483060   58823 api_server.go:279] https://192.168.39.90:8443/healthz returned 200:
	ok
	I1101 01:06:26.484299   58823 api_server.go:141] control plane version: v1.16.0
	I1101 01:06:26.484328   58823 api_server.go:131] duration metric: took 8.074303ms to wait for apiserver health ...
	I1101 01:06:26.484342   58823 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:06:26.488710   58823 system_pods.go:59] 4 kube-system pods found
	I1101 01:06:26.488745   58823 system_pods.go:61] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:26.488753   58823 system_pods.go:61] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:26.488766   58823 system_pods.go:61] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:26.488775   58823 system_pods.go:61] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:26.488787   58823 system_pods.go:74] duration metric: took 4.438458ms to wait for pod list to return data ...
	I1101 01:06:26.488797   58823 default_sa.go:34] waiting for default service account to be created ...
	I1101 01:06:26.492513   58823 default_sa.go:45] found service account: "default"
	I1101 01:06:26.492543   58823 default_sa.go:55] duration metric: took 3.739583ms for default service account to be created ...
	I1101 01:06:26.492553   58823 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 01:06:26.496897   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:26.496924   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:26.496929   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:26.496936   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:26.496942   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:26.496956   58823 retry.go:31] will retry after 215.348005ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:26.718021   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:26.718055   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:26.718064   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:26.718080   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:26.718086   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:26.718103   58823 retry.go:31] will retry after 357.067185ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:27.080480   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:27.080519   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:27.080528   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:27.080539   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:27.080548   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:27.080572   58823 retry.go:31] will retry after 441.083478ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:27.528922   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:27.528955   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:27.528964   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:27.528975   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:27.528984   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:27.529008   58823 retry.go:31] will retry after 595.152055ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:28.129735   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:28.129760   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:28.129765   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:28.129772   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:28.129778   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:28.129794   58823 retry.go:31] will retry after 591.454083ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:28.726058   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:28.726089   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:28.726097   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:28.726108   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:28.726118   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:28.726142   58823 retry.go:31] will retry after 682.338416ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:29.414282   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:29.414311   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:29.414321   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:29.414330   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:29.414338   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:29.414356   58823 retry.go:31] will retry after 953.248535ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:30.373950   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:30.373989   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:30.373998   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:30.374017   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:30.374028   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:30.374048   58823 retry.go:31] will retry after 1.291166145s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:31.671462   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:31.671516   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:31.671526   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:31.671537   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:31.671546   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:31.671565   58823 retry.go:31] will retry after 1.413833897s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:33.713596   58676 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002646 seconds
	I1101 01:06:33.713733   58676 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 01:06:33.731994   58676 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 01:06:34.275298   58676 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 01:06:34.275497   58676 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-008483 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 01:06:34.792259   58676 kubeadm.go:322] [bootstrap-token] Using token: ft1765.cra2ecqpjz8r5s0a
	I1101 01:06:34.793944   58676 out.go:204]   - Configuring RBAC rules ...
	I1101 01:06:34.794105   58676 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 01:06:34.800902   58676 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 01:06:34.811310   58676 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 01:06:34.821309   58676 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 01:06:34.826523   58676 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 01:06:34.832305   58676 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 01:06:34.852131   58676 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 01:06:35.137771   58676 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1101 01:06:35.206006   58676 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1101 01:06:35.207223   58676 kubeadm.go:322] 
	I1101 01:06:35.207316   58676 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1101 01:06:35.207327   58676 kubeadm.go:322] 
	I1101 01:06:35.207404   58676 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1101 01:06:35.207413   58676 kubeadm.go:322] 
	I1101 01:06:35.207448   58676 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1101 01:06:35.207528   58676 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 01:06:35.207619   58676 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 01:06:35.207640   58676 kubeadm.go:322] 
	I1101 01:06:35.207703   58676 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1101 01:06:35.207722   58676 kubeadm.go:322] 
	I1101 01:06:35.207796   58676 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 01:06:35.207805   58676 kubeadm.go:322] 
	I1101 01:06:35.207878   58676 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1101 01:06:35.208001   58676 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 01:06:35.208102   58676 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 01:06:35.208111   58676 kubeadm.go:322] 
	I1101 01:06:35.208207   58676 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 01:06:35.208314   58676 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1101 01:06:35.208337   58676 kubeadm.go:322] 
	I1101 01:06:35.208459   58676 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ft1765.cra2ecqpjz8r5s0a \
	I1101 01:06:35.208636   58676 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 \
	I1101 01:06:35.208674   58676 kubeadm.go:322] 	--control-plane 
	I1101 01:06:35.208687   58676 kubeadm.go:322] 
	I1101 01:06:35.208812   58676 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1101 01:06:35.208823   58676 kubeadm.go:322] 
	I1101 01:06:35.208936   58676 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ft1765.cra2ecqpjz8r5s0a \
	I1101 01:06:35.209057   58676 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 
	I1101 01:06:35.209758   58676 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 01:06:35.209780   58676 cni.go:84] Creating CNI manager for ""
	I1101 01:06:35.209790   58676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:06:35.211735   58676 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:06:35.213123   58676 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:06:35.235025   58676 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1101 01:06:35.271015   58676 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 01:06:35.271092   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9 minikube.k8s.io/name=no-preload-008483 minikube.k8s.io/updated_at=2023_11_01T01_06_35_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:35.271099   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:35.305061   58676 ops.go:34] apiserver oom_adj: -16
	I1101 01:06:35.663339   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:35.805680   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:33.090990   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:33.091030   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:33.091038   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:33.091049   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:33.091060   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:33.091078   58823 retry.go:31] will retry after 2.252641435s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:35.350673   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:35.350703   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:35.350711   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:35.350722   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:35.350735   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:35.350753   58823 retry.go:31] will retry after 2.131984659s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:36.402770   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:36.902353   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:37.402763   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:37.902598   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:38.401883   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:38.902775   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:39.402062   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:39.902544   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:40.402350   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:40.901853   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:37.489100   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:37.489127   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:37.489132   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:37.489141   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:37.489151   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:37.489169   58823 retry.go:31] will retry after 3.273821759s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:40.767389   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:40.767409   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:40.767414   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:40.767421   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:40.767427   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:40.767441   58823 retry.go:31] will retry after 4.351278698s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:41.402632   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:41.901859   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:42.402379   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:42.902816   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:43.402503   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:43.902158   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:44.402562   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:44.901867   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:45.401852   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:45.902865   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:45.124108   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:45.124138   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:45.124147   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:45.124158   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:45.124166   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:45.124184   58823 retry.go:31] will retry after 4.53047058s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:46.402463   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:46.902480   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:47.402022   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:47.568628   58676 kubeadm.go:1081] duration metric: took 12.297606595s to wait for elevateKubeSystemPrivileges.
	I1101 01:06:47.568672   58676 kubeadm.go:406] StartCluster complete in 5m8.570526689s
	I1101 01:06:47.568696   58676 settings.go:142] acquiring lock: {Name:mk7f269e64dfd8d176737f993e01f6e6badafbc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:06:47.568787   58676 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 01:06:47.570839   58676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/kubeconfig: {Name:mk08da65b6c71084e1cfafb19800038e8c8303e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:06:47.571093   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 01:06:47.571207   58676 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1101 01:06:47.571281   58676 addons.go:69] Setting storage-provisioner=true in profile "no-preload-008483"
	I1101 01:06:47.571307   58676 addons.go:69] Setting metrics-server=true in profile "no-preload-008483"
	I1101 01:06:47.571329   58676 addons.go:231] Setting addon metrics-server=true in "no-preload-008483"
	I1101 01:06:47.571345   58676 config.go:182] Loaded profile config "no-preload-008483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:06:47.571360   58676 addons.go:69] Setting default-storageclass=true in profile "no-preload-008483"
	I1101 01:06:47.571369   58676 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-008483"
	W1101 01:06:47.571348   58676 addons.go:240] addon metrics-server should already be in state true
	I1101 01:06:47.571441   58676 host.go:66] Checking if "no-preload-008483" exists ...
	I1101 01:06:47.571312   58676 addons.go:231] Setting addon storage-provisioner=true in "no-preload-008483"
	W1101 01:06:47.571490   58676 addons.go:240] addon storage-provisioner should already be in state true
	I1101 01:06:47.571527   58676 host.go:66] Checking if "no-preload-008483" exists ...
	I1101 01:06:47.571816   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.571815   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.571873   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.571892   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.571873   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.572006   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.590259   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39063
	I1101 01:06:47.590724   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.591055   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39635
	I1101 01:06:47.591202   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.591220   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.591229   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46549
	I1101 01:06:47.591621   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.591707   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.591743   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.592428   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.592471   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.592794   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.592808   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.592822   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.592826   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.593236   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.593283   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.593437   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetState
	I1101 01:06:47.593927   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.593966   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.598345   58676 addons.go:231] Setting addon default-storageclass=true in "no-preload-008483"
	W1101 01:06:47.598381   58676 addons.go:240] addon default-storageclass should already be in state true
	I1101 01:06:47.598413   58676 host.go:66] Checking if "no-preload-008483" exists ...
	I1101 01:06:47.598819   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.598871   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.613965   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43751
	I1101 01:06:47.614004   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40855
	I1101 01:06:47.614542   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.614669   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.615105   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.615121   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.615151   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.615189   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.615476   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.615537   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.615690   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetState
	I1101 01:06:47.615767   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetState
	I1101 01:06:47.617847   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:06:47.620144   58676 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:06:47.618264   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45253
	I1101 01:06:47.618444   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:06:47.621319   58676 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-008483" context rescaled to 1 replicas
	I1101 01:06:47.621520   58676 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.140 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 01:06:47.623048   58676 out.go:177] * Verifying Kubernetes components...
	I1101 01:06:47.621641   58676 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:06:47.621894   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.625008   58676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 01:06:47.625024   58676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:06:47.626461   58676 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1101 01:06:47.628411   58676 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 01:06:47.628425   58676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 01:06:47.628439   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:06:47.626617   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:06:47.627063   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.628510   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.628907   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.629438   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.629480   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.631968   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:06:47.632175   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:06:47.632212   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:06:47.632315   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:06:47.632508   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:06:47.632679   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:06:47.632739   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:06:47.632795   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:06:47.633383   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:06:47.633403   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:06:47.633427   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:06:47.633584   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:06:47.633708   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:06:47.633891   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:06:47.650937   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41791
	I1101 01:06:47.651372   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.651921   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.651956   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.652322   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.652536   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetState
	I1101 01:06:47.654393   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:06:47.654706   58676 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 01:06:47.654721   58676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 01:06:47.654743   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:06:47.657743   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:06:47.658176   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:06:47.658204   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:06:47.658448   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:06:47.658673   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:06:47.658836   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:06:47.659008   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:06:47.808648   58676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:06:47.837158   58676 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 01:06:47.837181   58676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1101 01:06:47.846004   58676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 01:06:47.882427   58676 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 01:06:47.882454   58676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 01:06:47.899419   58676 node_ready.go:35] waiting up to 6m0s for node "no-preload-008483" to be "Ready" ...
	I1101 01:06:47.899496   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 01:06:47.919788   58676 node_ready.go:49] node "no-preload-008483" has status "Ready":"True"
	I1101 01:06:47.919821   58676 node_ready.go:38] duration metric: took 20.370648ms waiting for node "no-preload-008483" to be "Ready" ...
	I1101 01:06:47.919836   58676 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:06:47.926205   58676 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:06:47.926232   58676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 01:06:47.930715   58676 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5tp9h" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:47.982413   58676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:06:49.813480   58676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.004790768s)
	I1101 01:06:49.813519   58676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.967476056s)
	I1101 01:06:49.813564   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:49.813588   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:49.813528   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:49.813617   58676 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.914052615s)
	I1101 01:06:49.813634   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:49.813643   58676 start.go:926] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1101 01:06:49.813924   58676 main.go:141] libmachine: (no-preload-008483) DBG | Closing plugin on server side
	I1101 01:06:49.813935   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:49.813956   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:49.813970   58676 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:49.813979   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:49.813980   58676 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:49.813990   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:49.813991   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:49.814014   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:49.814239   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:49.814258   58676 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:49.814321   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:49.814339   58676 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:49.857721   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:49.857749   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:49.858034   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:49.858053   58676 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:50.026844   58676 pod_ready.go:97] error getting pod "coredns-5dd5756b68-5tp9h" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-5tp9h" not found
	I1101 01:06:50.026876   58676 pod_ready.go:81] duration metric: took 2.096134316s waiting for pod "coredns-5dd5756b68-5tp9h" in "kube-system" namespace to be "Ready" ...
	E1101 01:06:50.026888   58676 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-5tp9h" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-5tp9h" not found
	I1101 01:06:50.026898   58676 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-m8v7v" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:50.204452   58676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.22199218s)
	I1101 01:06:50.204543   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:50.204561   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:50.204896   58676 main.go:141] libmachine: (no-preload-008483) DBG | Closing plugin on server side
	I1101 01:06:50.204985   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:50.205017   58676 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:50.205046   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:50.205064   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:50.205339   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:50.205360   58676 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:50.205371   58676 addons.go:467] Verifying addon metrics-server=true in "no-preload-008483"
	I1101 01:06:50.205393   58676 main.go:141] libmachine: (no-preload-008483) DBG | Closing plugin on server side
	I1101 01:06:50.207552   58676 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1101 01:06:50.208879   58676 addons.go:502] enable addons completed in 2.637673191s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1101 01:06:49.663546   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:49.663578   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:49.663585   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:49.663595   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:49.663604   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:49.663623   58823 retry.go:31] will retry after 5.557220121s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:52.106184   58676 pod_ready.go:92] pod "coredns-5dd5756b68-m8v7v" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:52.106208   58676 pod_ready.go:81] duration metric: took 2.079304042s waiting for pod "coredns-5dd5756b68-m8v7v" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.106218   58676 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.112508   58676 pod_ready.go:92] pod "etcd-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:52.112531   58676 pod_ready.go:81] duration metric: took 6.307404ms waiting for pod "etcd-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.112540   58676 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.119263   58676 pod_ready.go:92] pod "kube-apiserver-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:52.119296   58676 pod_ready.go:81] duration metric: took 6.748553ms waiting for pod "kube-apiserver-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.119311   58676 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.125594   58676 pod_ready.go:92] pod "kube-controller-manager-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:52.125619   58676 pod_ready.go:81] duration metric: took 6.30051ms waiting for pod "kube-controller-manager-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.125629   58676 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4cx5t" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.503777   58676 pod_ready.go:92] pod "kube-proxy-4cx5t" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:52.503802   58676 pod_ready.go:81] duration metric: took 378.166648ms waiting for pod "kube-proxy-4cx5t" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.503811   58676 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.904254   58676 pod_ready.go:92] pod "kube-scheduler-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:52.904275   58676 pod_ready.go:81] duration metric: took 400.457426ms waiting for pod "kube-scheduler-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.904284   58676 pod_ready.go:38] duration metric: took 4.984437509s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:06:52.904303   58676 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:06:52.904352   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:06:52.917549   58676 api_server.go:72] duration metric: took 5.295984843s to wait for apiserver process to appear ...
	I1101 01:06:52.917576   58676 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:06:52.917595   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:06:52.926515   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 200:
	ok
	I1101 01:06:52.927673   58676 api_server.go:141] control plane version: v1.28.3
	I1101 01:06:52.927692   58676 api_server.go:131] duration metric: took 10.109726ms to wait for apiserver health ...
	I1101 01:06:52.927700   58676 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:06:53.109620   58676 system_pods.go:59] 8 kube-system pods found
	I1101 01:06:53.109648   58676 system_pods.go:61] "coredns-5dd5756b68-m8v7v" [351a9458-075b-40d1-96d1-86a450a99251] Running
	I1101 01:06:53.109653   58676 system_pods.go:61] "etcd-no-preload-008483" [e1db4a59-f5e6-4114-a942-1faf4ff84af2] Running
	I1101 01:06:53.109657   58676 system_pods.go:61] "kube-apiserver-no-preload-008483" [f8f8bb39-3093-44bb-8255-5a7d78437a75] Running
	I1101 01:06:53.109661   58676 system_pods.go:61] "kube-controller-manager-no-preload-008483" [a45df9e4-3399-4c21-981f-3c3caaed52a8] Running
	I1101 01:06:53.109665   58676 system_pods.go:61] "kube-proxy-4cx5t" [57c1e87a-aa14-440d-9001-a6ba2ab7c8c6] Running
	I1101 01:06:53.109670   58676 system_pods.go:61] "kube-scheduler-no-preload-008483" [329b7a2d-6146-4e08-910e-ed4d40f57dcb] Running
	I1101 01:06:53.109676   58676 system_pods.go:61] "metrics-server-57f55c9bc5-qcxt7" [bf444b92-dd54-43fc-a9a8-0e9000b562e3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:53.109684   58676 system_pods.go:61] "storage-provisioner" [909163da-9021-4cee-9a72-1bc9b6ae9390] Running
	I1101 01:06:53.109693   58676 system_pods.go:74] duration metric: took 181.986766ms to wait for pod list to return data ...
	I1101 01:06:53.109706   58676 default_sa.go:34] waiting for default service account to be created ...
	I1101 01:06:53.305872   58676 default_sa.go:45] found service account: "default"
	I1101 01:06:53.305904   58676 default_sa.go:55] duration metric: took 196.187269ms for default service account to be created ...
	I1101 01:06:53.305919   58676 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 01:06:53.506566   58676 system_pods.go:86] 8 kube-system pods found
	I1101 01:06:53.506601   58676 system_pods.go:89] "coredns-5dd5756b68-m8v7v" [351a9458-075b-40d1-96d1-86a450a99251] Running
	I1101 01:06:53.506610   58676 system_pods.go:89] "etcd-no-preload-008483" [e1db4a59-f5e6-4114-a942-1faf4ff84af2] Running
	I1101 01:06:53.506618   58676 system_pods.go:89] "kube-apiserver-no-preload-008483" [f8f8bb39-3093-44bb-8255-5a7d78437a75] Running
	I1101 01:06:53.506625   58676 system_pods.go:89] "kube-controller-manager-no-preload-008483" [a45df9e4-3399-4c21-981f-3c3caaed52a8] Running
	I1101 01:06:53.506631   58676 system_pods.go:89] "kube-proxy-4cx5t" [57c1e87a-aa14-440d-9001-a6ba2ab7c8c6] Running
	I1101 01:06:53.506640   58676 system_pods.go:89] "kube-scheduler-no-preload-008483" [329b7a2d-6146-4e08-910e-ed4d40f57dcb] Running
	I1101 01:06:53.506651   58676 system_pods.go:89] "metrics-server-57f55c9bc5-qcxt7" [bf444b92-dd54-43fc-a9a8-0e9000b562e3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:53.506664   58676 system_pods.go:89] "storage-provisioner" [909163da-9021-4cee-9a72-1bc9b6ae9390] Running
	I1101 01:06:53.506675   58676 system_pods.go:126] duration metric: took 200.749464ms to wait for k8s-apps to be running ...
	I1101 01:06:53.506692   58676 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 01:06:53.506747   58676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:06:53.519471   58676 system_svc.go:56] duration metric: took 12.766173ms WaitForService to wait for kubelet.
	I1101 01:06:53.519502   58676 kubeadm.go:581] duration metric: took 5.897944072s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1101 01:06:53.519525   58676 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:06:53.705460   58676 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:06:53.705490   58676 node_conditions.go:123] node cpu capacity is 2
	I1101 01:06:53.705501   58676 node_conditions.go:105] duration metric: took 185.970851ms to run NodePressure ...
	I1101 01:06:53.705515   58676 start.go:228] waiting for startup goroutines ...
	I1101 01:06:53.705523   58676 start.go:233] waiting for cluster config update ...
	I1101 01:06:53.705537   58676 start.go:242] writing updated cluster config ...
	I1101 01:06:53.705824   58676 ssh_runner.go:195] Run: rm -f paused
	I1101 01:06:53.758508   58676 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1101 01:06:53.761998   58676 out.go:177] * Done! kubectl is now configured to use "no-preload-008483" cluster and "default" namespace by default
	I1101 01:06:55.226416   58823 system_pods.go:86] 5 kube-system pods found
	I1101 01:06:55.226443   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:55.226449   58823 system_pods.go:89] "kube-apiserver-old-k8s-version-330042" [1d813832-7c56-439f-aee9-c5c326e6cd3d] Pending
	I1101 01:06:55.226453   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:55.226460   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:55.226466   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:55.226480   58823 retry.go:31] will retry after 6.901184226s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:07:02.133379   58823 system_pods.go:86] 5 kube-system pods found
	I1101 01:07:02.133412   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:07:02.133421   58823 system_pods.go:89] "kube-apiserver-old-k8s-version-330042" [1d813832-7c56-439f-aee9-c5c326e6cd3d] Running
	I1101 01:07:02.133427   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:07:02.133442   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:07:02.133451   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:07:02.133471   58823 retry.go:31] will retry after 10.272464072s: missing components: etcd, kube-controller-manager, kube-scheduler
	I1101 01:07:12.412133   58823 system_pods.go:86] 5 kube-system pods found
	I1101 01:07:12.412166   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:07:12.412175   58823 system_pods.go:89] "kube-apiserver-old-k8s-version-330042" [1d813832-7c56-439f-aee9-c5c326e6cd3d] Running
	I1101 01:07:12.412181   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:07:12.412193   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:07:12.412202   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:07:12.412221   58823 retry.go:31] will retry after 11.290918588s: missing components: etcd, kube-controller-manager, kube-scheduler
	I1101 01:07:23.709462   58823 system_pods.go:86] 8 kube-system pods found
	I1101 01:07:23.709495   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:07:23.709503   58823 system_pods.go:89] "etcd-old-k8s-version-330042" [fc62fe53-9611-4b3d-9dca-a30d58618b2b] Running
	I1101 01:07:23.709510   58823 system_pods.go:89] "kube-apiserver-old-k8s-version-330042" [1d813832-7c56-439f-aee9-c5c326e6cd3d] Running
	I1101 01:07:23.709517   58823 system_pods.go:89] "kube-controller-manager-old-k8s-version-330042" [8ad0ccf9-fa8e-4205-b89c-f5f57cb7be6e] Running
	I1101 01:07:23.709524   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:07:23.709528   58823 system_pods.go:89] "kube-scheduler-old-k8s-version-330042" [2b077f6b-8077-4ccb-93c2-c6d3383b1113] Pending
	I1101 01:07:23.709534   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:07:23.709543   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:07:23.709559   58823 retry.go:31] will retry after 12.900513481s: missing components: kube-scheduler
	I1101 01:07:36.615720   58823 system_pods.go:86] 8 kube-system pods found
	I1101 01:07:36.615746   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:07:36.615751   58823 system_pods.go:89] "etcd-old-k8s-version-330042" [fc62fe53-9611-4b3d-9dca-a30d58618b2b] Running
	I1101 01:07:36.615756   58823 system_pods.go:89] "kube-apiserver-old-k8s-version-330042" [1d813832-7c56-439f-aee9-c5c326e6cd3d] Running
	I1101 01:07:36.615760   58823 system_pods.go:89] "kube-controller-manager-old-k8s-version-330042" [8ad0ccf9-fa8e-4205-b89c-f5f57cb7be6e] Running
	I1101 01:07:36.615763   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:07:36.615767   58823 system_pods.go:89] "kube-scheduler-old-k8s-version-330042" [2b077f6b-8077-4ccb-93c2-c6d3383b1113] Running
	I1101 01:07:36.615774   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:07:36.615780   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:07:36.615787   58823 system_pods.go:126] duration metric: took 1m10.123228938s to wait for k8s-apps to be running ...
	I1101 01:07:36.615793   58823 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 01:07:36.615837   58823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:07:36.634354   58823 system_svc.go:56] duration metric: took 18.547208ms WaitForService to wait for kubelet.
	I1101 01:07:36.634387   58823 kubeadm.go:581] duration metric: took 1m19.896166299s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1101 01:07:36.634412   58823 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:07:36.638286   58823 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:07:36.638315   58823 node_conditions.go:123] node cpu capacity is 2
	I1101 01:07:36.638329   58823 node_conditions.go:105] duration metric: took 3.911826ms to run NodePressure ...
	I1101 01:07:36.638344   58823 start.go:228] waiting for startup goroutines ...
	I1101 01:07:36.638351   58823 start.go:233] waiting for cluster config update ...
	I1101 01:07:36.638365   58823 start.go:242] writing updated cluster config ...
	I1101 01:07:36.638658   58823 ssh_runner.go:195] Run: rm -f paused
	I1101 01:07:36.688409   58823 start.go:600] kubectl: 1.28.3, cluster: 1.16.0 (minor skew: 12)
	I1101 01:07:36.690520   58823 out.go:177] 
	W1101 01:07:36.692006   58823 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.16.0.
	I1101 01:07:36.693512   58823 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1101 01:07:36.694940   58823 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-330042" cluster and "default" namespace by default
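	The warning above flags a client/server version skew (kubectl 1.28.3 against Kubernetes 1.16.0). A minimal sketch of following the log's own hint and querying the cluster with a version-matched kubectl, assuming the standard minikube CLI and the profile name shown above:
	  # Use the kubectl bundled with minikube (version-matched to the cluster):
	  minikube kubectl -- get pods -A
	  # Targeting this profile explicitly (-p/--profile is the usual minikube flag):
	  minikube -p old-k8s-version-330042 kubectl -- get pods -A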
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-11-01 01:00:47 UTC, ends at Wed 2023-11-01 01:15:25 UTC. --
	Nov 01 01:15:25 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:15:25.535766292Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698801325535741821,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=e530fb68-3dfe-4d15-8bbf-a3954cca84ad name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:15:25 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:15:25.536614312Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=30a9fa2d-6abb-4f2c-ae4c-ccfa771e61b0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:15:25 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:15:25.536678750Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=30a9fa2d-6abb-4f2c-ae4c-ccfa771e61b0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:15:25 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:15:25.536904232Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a85b8e74173dbc34ea106aa829e909bb3fdc9fc0aa01d5f03beec385939e885,PodSandboxId:af1db69584833a352404ac369d09504166c678f9aa4b89facb0dd0607707cc23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698800773059095055,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaba9583-e564-4804-9cd3-2b4de36c85da,},Annotations:map[string]string{io.kubernetes.container.hash: ac50747,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8745454c3ba65b39dcca4ec859db6eec9bde20e8655cef3187db49575282aa10,PodSandboxId:cfe8082e2628799d9efe7b672e81ddcad90a99dd001281bd3e01c9e33fb9b901,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698800772252049052,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kzgzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d59980-f28a-482c-9aa8-8502915417f0,},Annotations:map[string]string{io.kubernetes.container.hash: a7c19628,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22be33464e2b616389f8c1c9fe097420418464330ebba5269746922fb0dead46,PodSandboxId:7fbb88518b5739fdad0ad3c9ab7d26f2a104dc851ad0c5a93651276faa04d55a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698800771247642609,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rgzt8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d136c6a-e0b2-44c3-a17b-85649d6ff7b7,},Annotations:map[string]string{io.kubernetes.container.hash: c0e462ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f060c7640be39d364fe8967ac8f38f7e607548707a374220ef0feb1305678cf3,PodSandboxId:d5a7315324b17f0871c73c4759bac5ae2592a914739929d59ec9a6545d9acf35,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698800747606688658,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-639310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48958f49a129074eec
3f767ffb1dddd1,},Annotations:map[string]string{io.kubernetes.container.hash: 7ec6a6b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:178b076f51f7e8b659548480b9d8ff724f062bfec5c6ec0c3084b6d182210a51,PodSandboxId:b24c09745f83dc0eb98666502bab147baf943158dfd4f937ac3eff1a6e79f77c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698800747007400537,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-639310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 758e58c461773c5e0d
a7f3fa9c9b2628,},Annotations:map[string]string{io.kubernetes.container.hash: c78fc5b1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa1d8001c088acee185e9dad86cadfffdb1d5d2b62e785ec1ecd9cf0628faa80,PodSandboxId:8ec5d4b8331c34bf93d80dd0902a599768985a0fa2db30d361a1603fbe6958dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698800746803011038,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-639310,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 22af6e3d9158739e028e940aca1196e5,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203adcd67c53b37280b1e4ca576ca64ec2acf717740fea98d8ab311db9f57ed3,PodSandboxId:4b46efeec38d55eccf4d2a8220af4bfeb16484d377e7133b99afc539a4f7659c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698800746611905028,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-639310,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 76224889b8a2b452d7f7b1ab03f60615,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=30a9fa2d-6abb-4f2c-ae4c-ccfa771e61b0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:15:25 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:15:25.587432340Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=67da3501-c3bc-4b0f-87da-bce583c871e1 name=/runtime.v1.RuntimeService/Version
	Nov 01 01:15:25 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:15:25.587487293Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=67da3501-c3bc-4b0f-87da-bce583c871e1 name=/runtime.v1.RuntimeService/Version
	Nov 01 01:15:25 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:15:25.589218245Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d4781bd7-a217-40bc-afb3-529992c2d327 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:15:25 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:15:25.589637881Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698801325589620873,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=d4781bd7-a217-40bc-afb3-529992c2d327 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:15:25 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:15:25.590973693Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=57549021-4104-426b-ba51-b9f0be87f953 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:15:25 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:15:25.591024496Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=57549021-4104-426b-ba51-b9f0be87f953 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:15:25 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:15:25.591346240Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a85b8e74173dbc34ea106aa829e909bb3fdc9fc0aa01d5f03beec385939e885,PodSandboxId:af1db69584833a352404ac369d09504166c678f9aa4b89facb0dd0607707cc23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698800773059095055,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaba9583-e564-4804-9cd3-2b4de36c85da,},Annotations:map[string]string{io.kubernetes.container.hash: ac50747,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8745454c3ba65b39dcca4ec859db6eec9bde20e8655cef3187db49575282aa10,PodSandboxId:cfe8082e2628799d9efe7b672e81ddcad90a99dd001281bd3e01c9e33fb9b901,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698800772252049052,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kzgzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d59980-f28a-482c-9aa8-8502915417f0,},Annotations:map[string]string{io.kubernetes.container.hash: a7c19628,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22be33464e2b616389f8c1c9fe097420418464330ebba5269746922fb0dead46,PodSandboxId:7fbb88518b5739fdad0ad3c9ab7d26f2a104dc851ad0c5a93651276faa04d55a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698800771247642609,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rgzt8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d136c6a-e0b2-44c3-a17b-85649d6ff7b7,},Annotations:map[string]string{io.kubernetes.container.hash: c0e462ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f060c7640be39d364fe8967ac8f38f7e607548707a374220ef0feb1305678cf3,PodSandboxId:d5a7315324b17f0871c73c4759bac5ae2592a914739929d59ec9a6545d9acf35,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698800747606688658,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-639310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48958f49a129074eec
3f767ffb1dddd1,},Annotations:map[string]string{io.kubernetes.container.hash: 7ec6a6b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:178b076f51f7e8b659548480b9d8ff724f062bfec5c6ec0c3084b6d182210a51,PodSandboxId:b24c09745f83dc0eb98666502bab147baf943158dfd4f937ac3eff1a6e79f77c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698800747007400537,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-639310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 758e58c461773c5e0d
a7f3fa9c9b2628,},Annotations:map[string]string{io.kubernetes.container.hash: c78fc5b1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa1d8001c088acee185e9dad86cadfffdb1d5d2b62e785ec1ecd9cf0628faa80,PodSandboxId:8ec5d4b8331c34bf93d80dd0902a599768985a0fa2db30d361a1603fbe6958dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698800746803011038,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-639310,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 22af6e3d9158739e028e940aca1196e5,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203adcd67c53b37280b1e4ca576ca64ec2acf717740fea98d8ab311db9f57ed3,PodSandboxId:4b46efeec38d55eccf4d2a8220af4bfeb16484d377e7133b99afc539a4f7659c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698800746611905028,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-639310,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 76224889b8a2b452d7f7b1ab03f60615,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=57549021-4104-426b-ba51-b9f0be87f953 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:15:25 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:15:25.640268921Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=60960899-7b6b-47ca-956c-09eac1a6e95b name=/runtime.v1.RuntimeService/Version
	Nov 01 01:15:25 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:15:25.640355173Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=60960899-7b6b-47ca-956c-09eac1a6e95b name=/runtime.v1.RuntimeService/Version
	Nov 01 01:15:25 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:15:25.641810877Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f2c8d6dd-4d42-4211-be5e-682ab32aabaa name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:15:25 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:15:25.642424365Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698801325642402526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=f2c8d6dd-4d42-4211-be5e-682ab32aabaa name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:15:25 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:15:25.643219248Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=941872f4-1bfd-4021-a40a-17beb567700f name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:15:25 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:15:25.643288791Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=941872f4-1bfd-4021-a40a-17beb567700f name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:15:25 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:15:25.643502045Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a85b8e74173dbc34ea106aa829e909bb3fdc9fc0aa01d5f03beec385939e885,PodSandboxId:af1db69584833a352404ac369d09504166c678f9aa4b89facb0dd0607707cc23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698800773059095055,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaba9583-e564-4804-9cd3-2b4de36c85da,},Annotations:map[string]string{io.kubernetes.container.hash: ac50747,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8745454c3ba65b39dcca4ec859db6eec9bde20e8655cef3187db49575282aa10,PodSandboxId:cfe8082e2628799d9efe7b672e81ddcad90a99dd001281bd3e01c9e33fb9b901,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698800772252049052,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kzgzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d59980-f28a-482c-9aa8-8502915417f0,},Annotations:map[string]string{io.kubernetes.container.hash: a7c19628,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22be33464e2b616389f8c1c9fe097420418464330ebba5269746922fb0dead46,PodSandboxId:7fbb88518b5739fdad0ad3c9ab7d26f2a104dc851ad0c5a93651276faa04d55a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698800771247642609,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rgzt8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d136c6a-e0b2-44c3-a17b-85649d6ff7b7,},Annotations:map[string]string{io.kubernetes.container.hash: c0e462ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f060c7640be39d364fe8967ac8f38f7e607548707a374220ef0feb1305678cf3,PodSandboxId:d5a7315324b17f0871c73c4759bac5ae2592a914739929d59ec9a6545d9acf35,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698800747606688658,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-639310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48958f49a129074eec
3f767ffb1dddd1,},Annotations:map[string]string{io.kubernetes.container.hash: 7ec6a6b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:178b076f51f7e8b659548480b9d8ff724f062bfec5c6ec0c3084b6d182210a51,PodSandboxId:b24c09745f83dc0eb98666502bab147baf943158dfd4f937ac3eff1a6e79f77c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698800747007400537,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-639310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 758e58c461773c5e0d
a7f3fa9c9b2628,},Annotations:map[string]string{io.kubernetes.container.hash: c78fc5b1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa1d8001c088acee185e9dad86cadfffdb1d5d2b62e785ec1ecd9cf0628faa80,PodSandboxId:8ec5d4b8331c34bf93d80dd0902a599768985a0fa2db30d361a1603fbe6958dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698800746803011038,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-639310,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 22af6e3d9158739e028e940aca1196e5,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203adcd67c53b37280b1e4ca576ca64ec2acf717740fea98d8ab311db9f57ed3,PodSandboxId:4b46efeec38d55eccf4d2a8220af4bfeb16484d377e7133b99afc539a4f7659c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698800746611905028,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-639310,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 76224889b8a2b452d7f7b1ab03f60615,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=941872f4-1bfd-4021-a40a-17beb567700f name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:15:25 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:15:25.693446510Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=042cf502-74d7-478d-ae7e-13bc21d20d12 name=/runtime.v1.RuntimeService/Version
	Nov 01 01:15:25 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:15:25.693569635Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=042cf502-74d7-478d-ae7e-13bc21d20d12 name=/runtime.v1.RuntimeService/Version
	Nov 01 01:15:25 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:15:25.694729684Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=48384fbb-8d22-4c1a-be90-0e6ad1aca402 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:15:25 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:15:25.695096481Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698801325695084466,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=48384fbb-8d22-4c1a-be90-0e6ad1aca402 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:15:25 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:15:25.695901879Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=70c6dca4-6c84-402c-a7c7-2bd1d67204e1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:15:25 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:15:25.695945040Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=70c6dca4-6c84-402c-a7c7-2bd1d67204e1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:15:25 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:15:25.696108615Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a85b8e74173dbc34ea106aa829e909bb3fdc9fc0aa01d5f03beec385939e885,PodSandboxId:af1db69584833a352404ac369d09504166c678f9aa4b89facb0dd0607707cc23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698800773059095055,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaba9583-e564-4804-9cd3-2b4de36c85da,},Annotations:map[string]string{io.kubernetes.container.hash: ac50747,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8745454c3ba65b39dcca4ec859db6eec9bde20e8655cef3187db49575282aa10,PodSandboxId:cfe8082e2628799d9efe7b672e81ddcad90a99dd001281bd3e01c9e33fb9b901,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698800772252049052,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kzgzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d59980-f28a-482c-9aa8-8502915417f0,},Annotations:map[string]string{io.kubernetes.container.hash: a7c19628,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22be33464e2b616389f8c1c9fe097420418464330ebba5269746922fb0dead46,PodSandboxId:7fbb88518b5739fdad0ad3c9ab7d26f2a104dc851ad0c5a93651276faa04d55a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698800771247642609,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rgzt8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d136c6a-e0b2-44c3-a17b-85649d6ff7b7,},Annotations:map[string]string{io.kubernetes.container.hash: c0e462ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f060c7640be39d364fe8967ac8f38f7e607548707a374220ef0feb1305678cf3,PodSandboxId:d5a7315324b17f0871c73c4759bac5ae2592a914739929d59ec9a6545d9acf35,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698800747606688658,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-639310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48958f49a129074eec
3f767ffb1dddd1,},Annotations:map[string]string{io.kubernetes.container.hash: 7ec6a6b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:178b076f51f7e8b659548480b9d8ff724f062bfec5c6ec0c3084b6d182210a51,PodSandboxId:b24c09745f83dc0eb98666502bab147baf943158dfd4f937ac3eff1a6e79f77c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698800747007400537,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-639310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 758e58c461773c5e0d
a7f3fa9c9b2628,},Annotations:map[string]string{io.kubernetes.container.hash: c78fc5b1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa1d8001c088acee185e9dad86cadfffdb1d5d2b62e785ec1ecd9cf0628faa80,PodSandboxId:8ec5d4b8331c34bf93d80dd0902a599768985a0fa2db30d361a1603fbe6958dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698800746803011038,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-639310,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 22af6e3d9158739e028e940aca1196e5,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203adcd67c53b37280b1e4ca576ca64ec2acf717740fea98d8ab311db9f57ed3,PodSandboxId:4b46efeec38d55eccf4d2a8220af4bfeb16484d377e7133b99afc539a4f7659c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698800746611905028,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-639310,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 76224889b8a2b452d7f7b1ab03f60615,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=70c6dca4-6c84-402c-a7c7-2bd1d67204e1 name=/runtime.v1.RuntimeService/ListContainers
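	The Version, ImageFsInfo, and ListContainers request/response pairs in the CRI-O debug log above are ordinary CRI API calls, most likely issued while the status dump below was being collected. A rough manual equivalent on the node, assuming crictl is available and pointed at the CRI-O socket listed in the node annotations (unix:///var/run/crio/crio.sock):
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version       # RuntimeService/Version
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo   # ImageService/ImageFsInfo
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a         # RuntimeService/ListContainers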
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5a85b8e74173d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   af1db69584833       storage-provisioner
	8745454c3ba65       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   9 minutes ago       Running             kube-proxy                0                   cfe8082e26287       kube-proxy-kzgzn
	22be33464e2b6       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   7fbb88518b573       coredns-5dd5756b68-rgzt8
	f060c7640be39       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   d5a7315324b17       etcd-default-k8s-diff-port-639310
	178b076f51f7e       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   9 minutes ago       Running             kube-apiserver            2                   b24c09745f83d       kube-apiserver-default-k8s-diff-port-639310
	fa1d8001c088a       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   9 minutes ago       Running             kube-controller-manager   2                   8ec5d4b8331c3       kube-controller-manager-default-k8s-diff-port-639310
	203adcd67c53b       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   9 minutes ago       Running             kube-scheduler            2                   4b46efeec38d5       kube-scheduler-default-k8s-diff-port-639310
	
	* 
	* ==> coredns [22be33464e2b616389f8c1c9fe097420418464330ebba5269746922fb0dead46] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	[INFO] Reloading complete
	[INFO] 127.0.0.1:37063 - 58527 "HINFO IN 1214933915492992955.7899154619537145924. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010486748s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-639310
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-639310
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9
	                    minikube.k8s.io/name=default-k8s-diff-port-639310
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_01T01_05_56_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Nov 2023 01:05:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-639310
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Nov 2023 01:15:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Nov 2023 01:11:23 +0000   Wed, 01 Nov 2023 01:05:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Nov 2023 01:11:23 +0000   Wed, 01 Nov 2023 01:05:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Nov 2023 01:11:23 +0000   Wed, 01 Nov 2023 01:05:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Nov 2023 01:11:23 +0000   Wed, 01 Nov 2023 01:06:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.97
	  Hostname:    default-k8s-diff-port-639310
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 51f666f978d14a798a25b310a75e9d1b
	  System UUID:                51f666f9-78d1-4a79-8a25-b310a75e9d1b
	  Boot ID:                    b1b0235a-b85b-46ce-90bc-48cb264be07e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-rgzt8                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m17s
	  kube-system                 etcd-default-k8s-diff-port-639310                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m29s
	  kube-system                 kube-apiserver-default-k8s-diff-port-639310             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m29s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-639310    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m29s
	  kube-system                 kube-proxy-kzgzn                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-scheduler-default-k8s-diff-port-639310             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m29s
	  kube-system                 metrics-server-57f55c9bc5-65ph4                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m14s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m12s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m40s (x8 over 9m40s)  kubelet          Node default-k8s-diff-port-639310 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m40s (x8 over 9m40s)  kubelet          Node default-k8s-diff-port-639310 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m40s (x7 over 9m40s)  kubelet          Node default-k8s-diff-port-639310 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m29s                  kubelet          Node default-k8s-diff-port-639310 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m29s                  kubelet          Node default-k8s-diff-port-639310 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m29s                  kubelet          Node default-k8s-diff-port-639310 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m29s                  kubelet          Node default-k8s-diff-port-639310 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m29s                  kubelet          Starting kubelet.
	  Normal  NodeReady                9m19s                  kubelet          Node default-k8s-diff-port-639310 status is now: NodeReady
	  Normal  RegisteredNode           9m18s                  node-controller  Node default-k8s-diff-port-639310 event: Registered Node default-k8s-diff-port-639310 in Controller
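	For reference, the request percentages in the resource tables above are computed against the node's allocatable capacity (2 CPU, 2165900Ki memory); a quick check of the two aggregate figures, assuming plain awk on the host:
	  # 850m CPU requested out of 2000m allocatable -> reported as 42%
	  awk 'BEGIN { printf "cpu: %.1f%%\n", 850 / 2000 * 100 }'
	  # 370Mi (378880Ki) memory requested out of 2165900Ki allocatable -> reported as 17%
	  awk 'BEGIN { printf "memory: %.1f%%\n", 370 * 1024 / 2165900 * 100 }'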
	
	* 
	* ==> dmesg <==
	* [Nov 1 01:00] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.064736] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.600812] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.925421] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.139325] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.404456] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000012] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.614211] systemd-fstab-generator[637]: Ignoring "noauto" for root device
	[  +0.135036] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.171280] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.117222] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.266267] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[Nov 1 01:01] systemd-fstab-generator[911]: Ignoring "noauto" for root device
	[ +20.273945] kauditd_printk_skb: 29 callbacks suppressed
	[Nov 1 01:05] systemd-fstab-generator[3480]: Ignoring "noauto" for root device
	[ +10.817729] systemd-fstab-generator[3799]: Ignoring "noauto" for root device
	[Nov 1 01:06] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.927497] kauditd_printk_skb: 9 callbacks suppressed
	
	* 
	* ==> etcd [f060c7640be39d364fe8967ac8f38f7e607548707a374220ef0feb1305678cf3] <==
	* {"level":"info","ts":"2023-11-01T01:05:49.927989Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c03d6f8665d98ba switched to configuration voters=(865771915742910650)"}
	{"level":"info","ts":"2023-11-01T01:05:49.928375Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d703df346b154168","local-member-id":"c03d6f8665d98ba","added-peer-id":"c03d6f8665d98ba","added-peer-peer-urls":["https://192.168.72.97:2380"]}
	{"level":"info","ts":"2023-11-01T01:05:49.944428Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-01T01:05:49.944703Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"c03d6f8665d98ba","initial-advertise-peer-urls":["https://192.168.72.97:2380"],"listen-peer-urls":["https://192.168.72.97:2380"],"advertise-client-urls":["https://192.168.72.97:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.97:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-01T01:05:49.944765Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-01T01:05:49.94487Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.97:2380"}
	{"level":"info","ts":"2023-11-01T01:05:49.944903Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.97:2380"}
	{"level":"info","ts":"2023-11-01T01:05:50.066984Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c03d6f8665d98ba is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-01T01:05:50.067285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c03d6f8665d98ba became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-01T01:05:50.067397Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c03d6f8665d98ba received MsgPreVoteResp from c03d6f8665d98ba at term 1"}
	{"level":"info","ts":"2023-11-01T01:05:50.067583Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c03d6f8665d98ba became candidate at term 2"}
	{"level":"info","ts":"2023-11-01T01:05:50.067694Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c03d6f8665d98ba received MsgVoteResp from c03d6f8665d98ba at term 2"}
	{"level":"info","ts":"2023-11-01T01:05:50.06773Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c03d6f8665d98ba became leader at term 2"}
	{"level":"info","ts":"2023-11-01T01:05:50.067836Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c03d6f8665d98ba elected leader c03d6f8665d98ba at term 2"}
	{"level":"info","ts":"2023-11-01T01:05:50.07204Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T01:05:50.072081Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c03d6f8665d98ba","local-member-attributes":"{Name:default-k8s-diff-port-639310 ClientURLs:[https://192.168.72.97:2379]}","request-path":"/0/members/c03d6f8665d98ba/attributes","cluster-id":"d703df346b154168","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-01T01:05:50.073101Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-01T01:05:50.073533Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-01T01:05:50.073672Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-01T01:05:50.073216Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-01T01:05:50.074483Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-01T01:05:50.075299Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.97:2379"}
	{"level":"info","ts":"2023-11-01T01:05:50.086586Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d703df346b154168","local-member-id":"c03d6f8665d98ba","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T01:05:50.08688Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T01:05:50.08696Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  01:15:26 up 14 min,  0 users,  load average: 0.13, 0.22, 0.18
	Linux default-k8s-diff-port-639310 5.10.57 #1 SMP Tue Oct 31 22:14:31 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [178b076f51f7e8b659548480b9d8ff724f062bfec5c6ec0c3084b6d182210a51] <==
	* W1101 01:10:53.486791       1 handler_proxy.go:93] no RequestInfo found in the context
	E1101 01:10:53.487046       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	W1101 01:10:53.486973       1 handler_proxy.go:93] no RequestInfo found in the context
	I1101 01:10:53.487098       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1101 01:10:53.487321       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1101 01:10:53.488601       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1101 01:11:52.378610       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1101 01:11:53.488581       1 handler_proxy.go:93] no RequestInfo found in the context
	E1101 01:11:53.488731       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1101 01:11:53.488773       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1101 01:11:53.489027       1 handler_proxy.go:93] no RequestInfo found in the context
	E1101 01:11:53.489211       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1101 01:11:53.490204       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1101 01:12:52.378460       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1101 01:13:52.378756       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1101 01:13:53.489652       1 handler_proxy.go:93] no RequestInfo found in the context
	E1101 01:13:53.489712       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1101 01:13:53.489727       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1101 01:13:53.490926       1 handler_proxy.go:93] no RequestInfo found in the context
	E1101 01:13:53.491093       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1101 01:13:53.491191       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1101 01:14:52.378777       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [fa1d8001c088acee185e9dad86cadfffdb1d5d2b62e785ec1ecd9cf0628faa80] <==
	* I1101 01:09:41.547371       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="129.006µs"
	E1101 01:10:07.524241       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:10:07.963231       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:10:37.533078       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:10:37.976605       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:11:07.541351       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:11:07.986890       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:11:37.548779       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:11:37.996241       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:12:07.555658       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:12:08.005607       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1101 01:12:11.546634       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="469.445µs"
	I1101 01:12:22.545596       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="695.342µs"
	E1101 01:12:37.561446       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:12:38.015387       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:13:07.569841       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:13:08.025309       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:13:37.575483       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:13:38.034805       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:14:07.584934       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:14:08.047393       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:14:37.591225       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:14:38.056850       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:15:07.598897       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:15:08.069542       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [8745454c3ba65b39dcca4ec859db6eec9bde20e8655cef3187db49575282aa10] <==
	* I1101 01:06:13.113818       1 server_others.go:69] "Using iptables proxy"
	I1101 01:06:13.161889       1 node.go:141] Successfully retrieved node IP: 192.168.72.97
	I1101 01:06:13.299215       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1101 01:06:13.299345       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 01:06:13.302417       1 server_others.go:152] "Using iptables Proxier"
	I1101 01:06:13.303537       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 01:06:13.307214       1 server.go:846] "Version info" version="v1.28.3"
	I1101 01:06:13.307431       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 01:06:13.311259       1 config.go:188] "Starting service config controller"
	I1101 01:06:13.311596       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 01:06:13.311654       1 config.go:97] "Starting endpoint slice config controller"
	I1101 01:06:13.311672       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 01:06:13.314056       1 config.go:315] "Starting node config controller"
	I1101 01:06:13.314106       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 01:06:13.412086       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1101 01:06:13.412103       1 shared_informer.go:318] Caches are synced for service config
	I1101 01:06:13.415474       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [203adcd67c53b37280b1e4ca576ca64ec2acf717740fea98d8ab311db9f57ed3] <==
	* W1101 01:05:52.572349       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1101 01:05:52.572614       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1101 01:05:52.572635       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1101 01:05:52.572730       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1101 01:05:53.399206       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1101 01:05:53.399258       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1101 01:05:53.456518       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1101 01:05:53.456571       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1101 01:05:53.471411       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1101 01:05:53.471572       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1101 01:05:53.541709       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1101 01:05:53.541766       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1101 01:05:53.550451       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1101 01:05:53.550542       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1101 01:05:53.557388       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1101 01:05:53.557487       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1101 01:05:53.645840       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1101 01:05:53.645886       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1101 01:05:53.764061       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1101 01:05:53.764264       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 01:05:53.899340       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1101 01:05:53.899382       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1101 01:05:53.939919       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1101 01:05:53.939979       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1101 01:05:55.548472       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-11-01 01:00:47 UTC, ends at Wed 2023-11-01 01:15:26 UTC. --
	Nov 01 01:12:49 default-k8s-diff-port-639310 kubelet[3806]: E1101 01:12:49.527712    3806 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-65ph4" podUID="4683706e-65f6-4845-a5ad-60da8cd20d8e"
	Nov 01 01:12:56 default-k8s-diff-port-639310 kubelet[3806]: E1101 01:12:56.660217    3806 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 01 01:12:56 default-k8s-diff-port-639310 kubelet[3806]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 01 01:12:56 default-k8s-diff-port-639310 kubelet[3806]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 01 01:12:56 default-k8s-diff-port-639310 kubelet[3806]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 01 01:13:04 default-k8s-diff-port-639310 kubelet[3806]: E1101 01:13:04.528018    3806 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-65ph4" podUID="4683706e-65f6-4845-a5ad-60da8cd20d8e"
	Nov 01 01:13:15 default-k8s-diff-port-639310 kubelet[3806]: E1101 01:13:15.527108    3806 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-65ph4" podUID="4683706e-65f6-4845-a5ad-60da8cd20d8e"
	Nov 01 01:13:30 default-k8s-diff-port-639310 kubelet[3806]: E1101 01:13:30.527755    3806 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-65ph4" podUID="4683706e-65f6-4845-a5ad-60da8cd20d8e"
	Nov 01 01:13:44 default-k8s-diff-port-639310 kubelet[3806]: E1101 01:13:44.528697    3806 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-65ph4" podUID="4683706e-65f6-4845-a5ad-60da8cd20d8e"
	Nov 01 01:13:55 default-k8s-diff-port-639310 kubelet[3806]: E1101 01:13:55.527015    3806 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-65ph4" podUID="4683706e-65f6-4845-a5ad-60da8cd20d8e"
	Nov 01 01:13:56 default-k8s-diff-port-639310 kubelet[3806]: E1101 01:13:56.660519    3806 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 01 01:13:56 default-k8s-diff-port-639310 kubelet[3806]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 01 01:13:56 default-k8s-diff-port-639310 kubelet[3806]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 01 01:13:56 default-k8s-diff-port-639310 kubelet[3806]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 01 01:14:07 default-k8s-diff-port-639310 kubelet[3806]: E1101 01:14:07.527100    3806 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-65ph4" podUID="4683706e-65f6-4845-a5ad-60da8cd20d8e"
	Nov 01 01:14:20 default-k8s-diff-port-639310 kubelet[3806]: E1101 01:14:20.528071    3806 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-65ph4" podUID="4683706e-65f6-4845-a5ad-60da8cd20d8e"
	Nov 01 01:14:35 default-k8s-diff-port-639310 kubelet[3806]: E1101 01:14:35.526281    3806 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-65ph4" podUID="4683706e-65f6-4845-a5ad-60da8cd20d8e"
	Nov 01 01:14:46 default-k8s-diff-port-639310 kubelet[3806]: E1101 01:14:46.528329    3806 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-65ph4" podUID="4683706e-65f6-4845-a5ad-60da8cd20d8e"
	Nov 01 01:14:56 default-k8s-diff-port-639310 kubelet[3806]: E1101 01:14:56.560514    3806 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 01 01:14:56 default-k8s-diff-port-639310 kubelet[3806]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 01 01:14:56 default-k8s-diff-port-639310 kubelet[3806]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 01 01:14:56 default-k8s-diff-port-639310 kubelet[3806]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 01 01:15:00 default-k8s-diff-port-639310 kubelet[3806]: E1101 01:15:00.528562    3806 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-65ph4" podUID="4683706e-65f6-4845-a5ad-60da8cd20d8e"
	Nov 01 01:15:14 default-k8s-diff-port-639310 kubelet[3806]: E1101 01:15:14.527388    3806 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-65ph4" podUID="4683706e-65f6-4845-a5ad-60da8cd20d8e"
	Nov 01 01:15:25 default-k8s-diff-port-639310 kubelet[3806]: E1101 01:15:25.528078    3806 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-65ph4" podUID="4683706e-65f6-4845-a5ad-60da8cd20d8e"
	
	* 
	* ==> storage-provisioner [5a85b8e74173dbc34ea106aa829e909bb3fdc9fc0aa01d5f03beec385939e885] <==
	* I1101 01:06:13.243855       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 01:06:13.264607       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 01:06:13.264799       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 01:06:13.281818       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 01:06:13.282122       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-639310_81ff36f9-e443-479a-89fd-151df6d8833d!
	I1101 01:06:13.289680       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0dee62d4-05e8-4647-9976-47e7e68b166b", APIVersion:"v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-639310_81ff36f9-e443-479a-89fd-151df6d8833d became leader
	I1101 01:06:13.383184       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-639310_81ff36f9-e443-479a-89fd-151df6d8833d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-639310 -n default-k8s-diff-port-639310
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-639310 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-65ph4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-639310 describe pod metrics-server-57f55c9bc5-65ph4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-639310 describe pod metrics-server-57f55c9bc5-65ph4: exit status 1 (73.005794ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-65ph4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-639310 describe pod metrics-server-57f55c9bc5-65ph4: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.25s)
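Note on the recurring metrics-server errors in the kubelet log above: the Audit table below shows the addon was enabled with --images=MetricsServer=registry.k8s.io/echoserver:1.4 and --registries=MetricsServer=fake.domain, so the kubelet keeps trying to pull fake.domain/registry.k8s.io/echoserver:1.4, which cannot succeed; the ImagePullBackOff on metrics-server-57f55c9bc5-65ph4 follows from that test setup. One way to confirm which image the addon was pointed at (a sketch only, assuming the deployment is named metrics-server in kube-system, as the ReplicaSet name in the controller-manager log suggests):

	kubectl --context default-k8s-diff-port-639310 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'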

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1101 01:07:16.006190   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-008483 -n no-preload-008483
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-11-01 01:15:54.345237458 +0000 UTC m=+5527.909819453
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
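As the === RUN and start_stop_delete_test.go:274 lines above indicate, this subtest waits up to 9m0s for a pod matching k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace after the stop/start cycle. A rough manual approximation of that check (an illustration only, not the test's actual client-go wait loop; it assumes the kubectl context carries the profile name, as in the sections above):

	kubectl --context no-preload-008483 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context no-preload-008483 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=540s

In this run the wait expired with "context deadline exceeded", which is why the post-mortem below is collected.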
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-008483 -n no-preload-008483
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-008483 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-008483 logs -n 25: (1.597889981s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p flannel-090856 sudo                                 | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | containerd config dump                                 |                              |         |                |                     |                     |
	| ssh     | -p flannel-090856 sudo                                 | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | systemctl status crio --all                            |                              |         |                |                     |                     |
	|         | --full --no-pager                                      |                              |         |                |                     |                     |
	| ssh     | -p flannel-090856 sudo                                 | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |                |                     |                     |
	| start   | -p embed-certs-754132                                  | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:52 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| ssh     | -p flannel-090856 sudo find                            | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |                |                     |                     |
	| ssh     | -p flannel-090856 sudo crio                            | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | config                                                 |                              |         |                |                     |                     |
	| delete  | -p flannel-090856                                      | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	| delete  | -p                                                     | disable-driver-mounts-130996 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | disable-driver-mounts-130996                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:53 UTC |
	|         | default-k8s-diff-port-639310                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-008483             | no-preload-008483            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC | 01 Nov 23 00:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-008483                                   | no-preload-008483            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-754132            | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC | 01 Nov 23 00:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-754132                                  | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-330042        | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC | 01 Nov 23 00:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-330042                              | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-639310  | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:53 UTC | 01 Nov 23 00:53 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:53 UTC |                     |
	|         | default-k8s-diff-port-639310                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-008483                  | no-preload-008483            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-754132                 | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-008483                                   | no-preload-008483            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC | 01 Nov 23 01:06 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| start   | -p embed-certs-754132                                  | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC | 01 Nov 23 01:05 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-330042             | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-330042                              | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC | 01 Nov 23 01:07 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-639310       | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:56 UTC | 01 Nov 23 01:06 UTC |
	|         | default-k8s-diff-port-639310                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/01 00:56:25
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 00:56:25.029853   59148 out.go:296] Setting OutFile to fd 1 ...
	I1101 00:56:25.030119   59148 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:56:25.030128   59148 out.go:309] Setting ErrFile to fd 2...
	I1101 00:56:25.030133   59148 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:56:25.030311   59148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7305/.minikube/bin
	I1101 00:56:25.030856   59148 out.go:303] Setting JSON to false
	I1101 00:56:25.031741   59148 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5930,"bootTime":1698794255,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 00:56:25.031805   59148 start.go:138] virtualization: kvm guest
	I1101 00:56:25.034341   59148 out.go:177] * [default-k8s-diff-port-639310] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1101 00:56:25.036261   59148 out.go:177]   - MINIKUBE_LOCATION=17486
	I1101 00:56:25.037829   59148 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 00:56:25.036294   59148 notify.go:220] Checking for updates...
	I1101 00:56:25.041068   59148 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 00:56:25.042691   59148 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7305/.minikube
	I1101 00:56:25.044204   59148 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 00:56:25.045719   59148 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 00:56:25.047781   59148 config.go:182] Loaded profile config "default-k8s-diff-port-639310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:56:25.048183   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:56:25.048245   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:56:25.062714   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34345
	I1101 00:56:25.063108   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:56:25.063662   59148 main.go:141] libmachine: Using API Version  1
	I1101 00:56:25.063682   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:56:25.064083   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:56:25.064302   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 00:56:25.064571   59148 driver.go:378] Setting default libvirt URI to qemu:///system
	I1101 00:56:25.064917   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:56:25.064958   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:56:25.079214   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46451
	I1101 00:56:25.079576   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:56:25.080090   59148 main.go:141] libmachine: Using API Version  1
	I1101 00:56:25.080115   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:56:25.080419   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:56:25.080616   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 00:56:25.119015   59148 out.go:177] * Using the kvm2 driver based on existing profile
	I1101 00:56:25.120650   59148 start.go:298] selected driver: kvm2
	I1101 00:56:25.120670   59148 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-639310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-639310 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.97 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeReq
uested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:56:25.120819   59148 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 00:56:25.121515   59148 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:56:25.121580   59148 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17486-7305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1101 00:56:25.137482   59148 install.go:137] /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1101 00:56:25.137885   59148 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 00:56:25.137962   59148 cni.go:84] Creating CNI manager for ""
	I1101 00:56:25.137976   59148 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 00:56:25.137988   59148 start_flags.go:323] config:
	{Name:default-k8s-diff-port-639310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-63931
0 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.97 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/h
ome/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:56:25.138186   59148 iso.go:125] acquiring lock: {Name:mk1f649ca0b7c1ae293cd66cb85f9eeda028b20b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:56:25.140405   59148 out.go:177] * Starting control plane node default-k8s-diff-port-639310 in cluster default-k8s-diff-port-639310
	I1101 00:56:25.141855   59148 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 00:56:25.141918   59148 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1101 00:56:25.141935   59148 cache.go:56] Caching tarball of preloaded images
	I1101 00:56:25.142048   59148 preload.go:174] Found /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 00:56:25.142066   59148 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1101 00:56:25.142204   59148 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/config.json ...
	I1101 00:56:25.142449   59148 start.go:365] acquiring machines lock for default-k8s-diff-port-639310: {Name:mk7aad88408c319111b9be8e59d9593a9e88374b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 00:56:26.060176   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:29.132322   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:35.212221   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:38.284225   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:44.364219   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:47.436224   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:53.516201   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:56.588256   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:02.668213   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:05.740252   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:11.820242   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:14.892259   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:20.972213   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:24.044181   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:30.124291   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:33.196239   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:39.276183   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:42.348235   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:48.428230   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:51.500275   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:57.580250   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:00.652208   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:06.732207   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:09.804251   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:15.884265   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:18.956206   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:25.040217   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:28.108288   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:34.188238   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:37.260268   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:43.340210   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:46.412248   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:52.492221   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:55.564188   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:01.644193   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:04.716194   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:10.796265   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:13.868226   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:19.948219   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:23.020283   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:29.100251   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:32.172268   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:38.252219   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:41.324223   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:47.404323   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:50.476273   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:53.480339   58730 start.go:369] acquired machines lock for "embed-certs-754132" in 4m35.118425724s
	I1101 00:59:53.480387   58730 start.go:96] Skipping create...Using existing machine configuration
	I1101 00:59:53.480393   58730 fix.go:54] fixHost starting: 
	I1101 00:59:53.480707   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:59:53.480737   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:59:53.495582   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34891
	I1101 00:59:53.495998   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:59:53.496445   58730 main.go:141] libmachine: Using API Version  1
	I1101 00:59:53.496466   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:59:53.496844   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:59:53.497017   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 00:59:53.497171   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetState
	I1101 00:59:53.498937   58730 fix.go:102] recreateIfNeeded on embed-certs-754132: state=Stopped err=<nil>
	I1101 00:59:53.498956   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	W1101 00:59:53.499128   58730 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 00:59:53.500909   58730 out.go:177] * Restarting existing kvm2 VM for "embed-certs-754132" ...
	I1101 00:59:53.478140   58676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 00:59:53.478177   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 00:59:53.480187   58676 machine.go:91] provisioned docker machine in 4m37.408348367s
	I1101 00:59:53.480232   58676 fix.go:56] fixHost completed within 4m37.430154401s
	I1101 00:59:53.480241   58676 start.go:83] releasing machines lock for "no-preload-008483", held for 4m37.430178737s
	W1101 00:59:53.480270   58676 start.go:691] error starting host: provision: host is not running
	W1101 00:59:53.480361   58676 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1101 00:59:53.480371   58676 start.go:706] Will try again in 5 seconds ...
	I1101 00:59:53.502467   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Start
	I1101 00:59:53.502656   58730 main.go:141] libmachine: (embed-certs-754132) Ensuring networks are active...
	I1101 00:59:53.503633   58730 main.go:141] libmachine: (embed-certs-754132) Ensuring network default is active
	I1101 00:59:53.504036   58730 main.go:141] libmachine: (embed-certs-754132) Ensuring network mk-embed-certs-754132 is active
	I1101 00:59:53.504557   58730 main.go:141] libmachine: (embed-certs-754132) Getting domain xml...
	I1101 00:59:53.505302   58730 main.go:141] libmachine: (embed-certs-754132) Creating domain...
	I1101 00:59:54.749625   58730 main.go:141] libmachine: (embed-certs-754132) Waiting to get IP...
	I1101 00:59:54.750551   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:54.750924   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:54.751002   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:54.750917   59675 retry.go:31] will retry after 295.652358ms: waiting for machine to come up
	I1101 00:59:55.048450   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:55.048884   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:55.048910   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:55.048845   59675 retry.go:31] will retry after 335.376353ms: waiting for machine to come up
	I1101 00:59:55.385612   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:55.385959   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:55.386000   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:55.385952   59675 retry.go:31] will retry after 353.381783ms: waiting for machine to come up
	I1101 00:59:55.740456   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:55.740943   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:55.740979   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:55.740874   59675 retry.go:31] will retry after 417.863733ms: waiting for machine to come up
	I1101 00:59:56.160773   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:56.161271   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:56.161298   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:56.161236   59675 retry.go:31] will retry after 659.454883ms: waiting for machine to come up
	I1101 00:59:56.822139   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:56.822551   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:56.822573   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:56.822511   59675 retry.go:31] will retry after 627.06089ms: waiting for machine to come up
	I1101 00:59:57.451254   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:57.451659   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:57.451687   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:57.451624   59675 retry.go:31] will retry after 1.095096876s: waiting for machine to come up
	I1101 00:59:58.481145   58676 start.go:365] acquiring machines lock for no-preload-008483: {Name:mk7aad88408c319111b9be8e59d9593a9e88374b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 00:59:58.548870   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:58.549359   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:58.549410   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:58.549323   59675 retry.go:31] will retry after 1.133377858s: waiting for machine to come up
	I1101 00:59:59.684741   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:59.685182   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:59.685205   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:59.685149   59675 retry.go:31] will retry after 1.332824718s: waiting for machine to come up
	I1101 01:00:01.019662   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:01.020166   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 01:00:01.020217   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 01:00:01.020119   59675 retry.go:31] will retry after 1.62664347s: waiting for machine to come up
	I1101 01:00:02.649017   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:02.649459   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 01:00:02.649490   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 01:00:02.649404   59675 retry.go:31] will retry after 2.043788133s: waiting for machine to come up
	I1101 01:00:04.695225   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:04.695657   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 01:00:04.695711   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 01:00:04.695640   59675 retry.go:31] will retry after 2.435347975s: waiting for machine to come up
	I1101 01:00:07.133078   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:07.133531   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 01:00:07.133567   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 01:00:07.133492   59675 retry.go:31] will retry after 2.768108097s: waiting for machine to come up
	I1101 01:00:09.903094   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:09.903460   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 01:00:09.903484   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 01:00:09.903424   59675 retry.go:31] will retry after 3.955575113s: waiting for machine to come up
	I1101 01:00:15.240546   58823 start.go:369] acquired machines lock for "old-k8s-version-330042" in 4m47.663537715s
	I1101 01:00:15.240608   58823 start.go:96] Skipping create...Using existing machine configuration
	I1101 01:00:15.240616   58823 fix.go:54] fixHost starting: 
	I1101 01:00:15.241087   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:00:15.241135   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:00:15.260921   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45157
	I1101 01:00:15.261342   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:00:15.261921   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:00:15.261954   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:00:15.262285   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:00:15.262488   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:15.262657   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetState
	I1101 01:00:15.264332   58823 fix.go:102] recreateIfNeeded on old-k8s-version-330042: state=Stopped err=<nil>
	I1101 01:00:15.264357   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	W1101 01:00:15.264541   58823 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 01:00:15.266960   58823 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-330042" ...
	I1101 01:00:13.860184   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.860818   58730 main.go:141] libmachine: (embed-certs-754132) Found IP for machine: 192.168.61.83
	I1101 01:00:13.860849   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has current primary IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.860866   58730 main.go:141] libmachine: (embed-certs-754132) Reserving static IP address...
	I1101 01:00:13.861321   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "embed-certs-754132", mac: "52:54:00:5e:2f:dd", ip: "192.168.61.83"} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:13.861350   58730 main.go:141] libmachine: (embed-certs-754132) Reserved static IP address: 192.168.61.83
	I1101 01:00:13.861362   58730 main.go:141] libmachine: (embed-certs-754132) DBG | skip adding static IP to network mk-embed-certs-754132 - found existing host DHCP lease matching {name: "embed-certs-754132", mac: "52:54:00:5e:2f:dd", ip: "192.168.61.83"}
	I1101 01:00:13.861372   58730 main.go:141] libmachine: (embed-certs-754132) Waiting for SSH to be available...
	I1101 01:00:13.861384   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Getting to WaitForSSH function...
	I1101 01:00:13.864760   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.865204   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:13.865232   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.865368   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Using SSH client type: external
	I1101 01:00:13.865408   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa (-rw-------)
	I1101 01:00:13.865434   58730 main.go:141] libmachine: (embed-certs-754132) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.83 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 01:00:13.865446   58730 main.go:141] libmachine: (embed-certs-754132) DBG | About to run SSH command:
	I1101 01:00:13.865454   58730 main.go:141] libmachine: (embed-certs-754132) DBG | exit 0
	I1101 01:00:13.964103   58730 main.go:141] libmachine: (embed-certs-754132) DBG | SSH cmd err, output: <nil>: 
	I1101 01:00:13.964444   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetConfigRaw
	I1101 01:00:13.965066   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetIP
	I1101 01:00:13.967463   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.967768   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:13.967791   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.968100   58730 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/config.json ...
	I1101 01:00:13.968294   58730 machine.go:88] provisioning docker machine ...
	I1101 01:00:13.968312   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:00:13.968530   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetMachineName
	I1101 01:00:13.968707   58730 buildroot.go:166] provisioning hostname "embed-certs-754132"
	I1101 01:00:13.968728   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetMachineName
	I1101 01:00:13.968901   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:13.971288   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.971637   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:13.971676   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.971792   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:13.972000   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:13.972181   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:13.972312   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:13.972476   58730 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:13.972798   58730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I1101 01:00:13.972812   58730 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-754132 && echo "embed-certs-754132" | sudo tee /etc/hostname
	I1101 01:00:14.121000   58730 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-754132
	
	I1101 01:00:14.121036   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:14.124379   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.124813   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:14.124840   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.125085   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:14.125339   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:14.125667   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:14.125832   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:14.126091   58730 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:14.126401   58730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I1101 01:00:14.126418   58730 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-754132' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-754132/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-754132' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 01:00:14.268155   58730 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 01:00:14.268188   58730 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1101 01:00:14.268210   58730 buildroot.go:174] setting up certificates
	I1101 01:00:14.268238   58730 provision.go:83] configureAuth start
	I1101 01:00:14.268255   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetMachineName
	I1101 01:00:14.268542   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetIP
	I1101 01:00:14.271516   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.271946   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:14.271984   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.272150   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:14.274610   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.275017   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:14.275054   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.275206   58730 provision.go:138] copyHostCerts
	I1101 01:00:14.275269   58730 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem, removing ...
	I1101 01:00:14.275282   58730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 01:00:14.275351   58730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1101 01:00:14.275442   58730 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem, removing ...
	I1101 01:00:14.275450   58730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 01:00:14.275475   58730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1101 01:00:14.275526   58730 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem, removing ...
	I1101 01:00:14.275533   58730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 01:00:14.275571   58730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1101 01:00:14.275616   58730 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.embed-certs-754132 san=[192.168.61.83 192.168.61.83 localhost 127.0.0.1 minikube embed-certs-754132]
	I1101 01:00:14.494175   58730 provision.go:172] copyRemoteCerts
	I1101 01:00:14.494239   58730 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 01:00:14.494265   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:14.496921   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.497263   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:14.497310   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.497482   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:14.497748   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:14.497906   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:14.498052   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:00:14.592739   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 01:00:14.614862   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1101 01:00:14.636483   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1101 01:00:14.658154   58730 provision.go:86] duration metric: configureAuth took 389.900669ms
	I1101 01:00:14.658179   58730 buildroot.go:189] setting minikube options for container-runtime
	I1101 01:00:14.658364   58730 config.go:182] Loaded profile config "embed-certs-754132": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:00:14.658478   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:14.661110   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.661450   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:14.661500   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.661667   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:14.661853   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:14.661997   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:14.662120   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:14.662279   58730 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:14.662573   58730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I1101 01:00:14.662589   58730 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 01:00:14.974481   58730 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 01:00:14.974505   58730 machine.go:91] provisioned docker machine in 1.006198078s
	I1101 01:00:14.974521   58730 start.go:300] post-start starting for "embed-certs-754132" (driver="kvm2")
	I1101 01:00:14.974534   58730 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 01:00:14.974556   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:00:14.974913   58730 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 01:00:14.974946   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:14.977485   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.977815   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:14.977846   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.977970   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:14.978146   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:14.978310   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:14.978470   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:00:15.073889   58730 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 01:00:15.077710   58730 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 01:00:15.077734   58730 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1101 01:00:15.077791   58730 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1101 01:00:15.077855   58730 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> 145042.pem in /etc/ssl/certs
	I1101 01:00:15.077961   58730 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 01:00:15.086567   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:00:15.107446   58730 start.go:303] post-start completed in 132.911351ms
	I1101 01:00:15.107468   58730 fix.go:56] fixHost completed within 21.627074953s
	I1101 01:00:15.107485   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:15.110070   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.110392   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:15.110426   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.110552   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:15.110748   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:15.110914   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:15.111078   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:15.111268   58730 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:15.111683   58730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I1101 01:00:15.111696   58730 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1101 01:00:15.240326   58730 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698800415.188118531
	
	I1101 01:00:15.240357   58730 fix.go:206] guest clock: 1698800415.188118531
	I1101 01:00:15.240365   58730 fix.go:219] Guest: 2023-11-01 01:00:15.188118531 +0000 UTC Remote: 2023-11-01 01:00:15.107470988 +0000 UTC m=+296.909935143 (delta=80.647543ms)
	I1101 01:00:15.240385   58730 fix.go:190] guest clock delta is within tolerance: 80.647543ms
	I1101 01:00:15.240420   58730 start.go:83] releasing machines lock for "embed-certs-754132", held for 21.760022516s
	I1101 01:00:15.240464   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:00:15.240736   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetIP
	I1101 01:00:15.243570   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.243905   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:15.243961   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.244163   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:00:15.244698   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:00:15.244872   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:00:15.244948   58730 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 01:00:15.245012   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:15.245063   58730 ssh_runner.go:195] Run: cat /version.json
	I1101 01:00:15.245089   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:15.247618   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.247886   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.247985   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:15.248018   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.248265   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:15.248358   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:15.248387   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.248422   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:15.248600   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:15.248601   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:15.248774   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:15.248765   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:00:15.248913   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:15.249034   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:00:15.383514   58730 ssh_runner.go:195] Run: systemctl --version
	I1101 01:00:15.389291   58730 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 01:00:15.531982   58730 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 01:00:15.537622   58730 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 01:00:15.537711   58730 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 01:00:15.554440   58730 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 01:00:15.554488   58730 start.go:472] detecting cgroup driver to use...
	I1101 01:00:15.554549   58730 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 01:00:15.569732   58730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 01:00:15.582752   58730 docker.go:204] disabling cri-docker service (if available) ...
	I1101 01:00:15.582795   58730 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 01:00:15.596221   58730 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 01:00:15.609815   58730 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 01:00:15.717679   58730 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 01:00:15.842128   58730 docker.go:220] disabling docker service ...
	I1101 01:00:15.842203   58730 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 01:00:15.854613   58730 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 01:00:15.869487   58730 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 01:00:15.991107   58730 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 01:00:16.118392   58730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 01:00:16.131570   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 01:00:16.150691   58730 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 01:00:16.150755   58730 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:16.160081   58730 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 01:00:16.160171   58730 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:16.170277   58730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:16.180469   58730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:16.189966   58730 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 01:00:16.199465   58730 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 01:00:16.207995   58730 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 01:00:16.208057   58730 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 01:00:16.221491   58730 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 01:00:16.231855   58730 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 01:00:16.355227   58730 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 01:00:16.520341   58730 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 01:00:16.520403   58730 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 01:00:16.525071   58730 start.go:540] Will wait 60s for crictl version
	I1101 01:00:16.525143   58730 ssh_runner.go:195] Run: which crictl
	I1101 01:00:16.529138   58730 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 01:00:16.566007   58730 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1101 01:00:16.566082   58730 ssh_runner.go:195] Run: crio --version
	I1101 01:00:16.612652   58730 ssh_runner.go:195] Run: crio --version
	I1101 01:00:16.665668   58730 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1101 01:00:15.268389   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Start
	I1101 01:00:15.268575   58823 main.go:141] libmachine: (old-k8s-version-330042) Ensuring networks are active...
	I1101 01:00:15.269280   58823 main.go:141] libmachine: (old-k8s-version-330042) Ensuring network default is active
	I1101 01:00:15.269618   58823 main.go:141] libmachine: (old-k8s-version-330042) Ensuring network mk-old-k8s-version-330042 is active
	I1101 01:00:15.270056   58823 main.go:141] libmachine: (old-k8s-version-330042) Getting domain xml...
	I1101 01:00:15.270814   58823 main.go:141] libmachine: (old-k8s-version-330042) Creating domain...
	I1101 01:00:16.566526   58823 main.go:141] libmachine: (old-k8s-version-330042) Waiting to get IP...
	I1101 01:00:16.567713   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:16.568239   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:16.568336   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:16.568220   59797 retry.go:31] will retry after 200.046919ms: waiting for machine to come up
	I1101 01:00:16.769849   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:16.770436   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:16.770477   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:16.770427   59797 retry.go:31] will retry after 301.397937ms: waiting for machine to come up
	I1101 01:00:17.074180   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:17.074657   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:17.074689   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:17.074626   59797 retry.go:31] will retry after 462.511505ms: waiting for machine to come up
	I1101 01:00:16.667657   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetIP
	I1101 01:00:16.670756   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:16.671148   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:16.671216   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:16.671377   58730 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1101 01:00:16.675342   58730 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:00:16.687224   58730 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 01:00:16.687310   58730 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:00:16.726714   58730 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1101 01:00:16.726779   58730 ssh_runner.go:195] Run: which lz4
	I1101 01:00:16.730745   58730 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1101 01:00:16.734588   58730 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 01:00:16.734623   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1101 01:00:17.538840   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:17.539313   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:17.539337   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:17.539276   59797 retry.go:31] will retry after 562.894181ms: waiting for machine to come up
	I1101 01:00:18.104173   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:18.104678   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:18.104712   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:18.104641   59797 retry.go:31] will retry after 659.582768ms: waiting for machine to come up
	I1101 01:00:18.766319   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:18.766719   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:18.766749   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:18.766688   59797 retry.go:31] will retry after 626.783168ms: waiting for machine to come up
	I1101 01:00:19.395203   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:19.395693   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:19.395720   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:19.395651   59797 retry.go:31] will retry after 884.294618ms: waiting for machine to come up
	I1101 01:00:20.281677   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:20.282152   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:20.282176   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:20.282094   59797 retry.go:31] will retry after 997.794459ms: waiting for machine to come up
	I1101 01:00:21.281118   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:21.281568   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:21.281596   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:21.281525   59797 retry.go:31] will retry after 1.624252325s: waiting for machine to come up
	I1101 01:00:18.514400   58730 crio.go:444] Took 1.783693 seconds to copy over tarball
	I1101 01:00:18.514460   58730 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 01:00:21.481089   58730 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.966600648s)
	I1101 01:00:21.481118   58730 crio.go:451] Took 2.966695 seconds to extract the tarball
	I1101 01:00:21.481130   58730 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 01:00:21.520934   58730 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:00:21.568541   58730 crio.go:496] all images are preloaded for cri-o runtime.
	I1101 01:00:21.568569   58730 cache_images.go:84] Images are preloaded, skipping loading
	I1101 01:00:21.568638   58730 ssh_runner.go:195] Run: crio config
	I1101 01:00:21.626687   58730 cni.go:84] Creating CNI manager for ""
	I1101 01:00:21.626707   58730 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:00:21.626724   58730 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 01:00:21.626745   58730 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.83 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-754132 NodeName:embed-certs-754132 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.83"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.83 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 01:00:21.626906   58730 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.83
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-754132"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.83
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.83"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
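	A minimal, illustrative sanity check of the config rendered above (not part of the test run; the file path and field names are taken from the surrounding log, and the validate subcommand is assumed to be available in this kubeadm version):
	  # confirm the advertised API address, pod CIDR and control-plane endpoint match the log
	  grep -E 'advertiseAddress|podSubnet|controlPlaneEndpoint' /var/tmp/minikube/kubeadm.yaml
	  # optional structural validation (kubeadm >= 1.26)
	  sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml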
	I1101 01:00:21.627000   58730 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-754132 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.83
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:embed-certs-754132 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
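	The kubelet drop-in shown above is copied onto the node a few lines below; an illustrative way to confirm it landed and that systemd sees it (commands assumed, not part of the test run):
	  sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	  sudo systemctl daemon-reload
	  systemctl cat kubelet --no-pager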
	I1101 01:00:21.627062   58730 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 01:00:21.635965   58730 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 01:00:21.636048   58730 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 01:00:21.644318   58730 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1101 01:00:21.659722   58730 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 01:00:21.674541   58730 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1101 01:00:21.690451   58730 ssh_runner.go:195] Run: grep 192.168.61.83	control-plane.minikube.internal$ /etc/hosts
	I1101 01:00:21.694013   58730 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.83	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:00:21.705929   58730 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132 for IP: 192.168.61.83
	I1101 01:00:21.705978   58730 certs.go:190] acquiring lock for shared ca certs: {Name:mk6185b3938c4b51e80f57b7c81926c81b632b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:00:21.706152   58730 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key
	I1101 01:00:21.706193   58730 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key
	I1101 01:00:21.706255   58730 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/client.key
	I1101 01:00:21.706321   58730 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/apiserver.key.00ce3257
	I1101 01:00:21.706365   58730 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/proxy-client.key
	I1101 01:00:21.706507   58730 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem (1338 bytes)
	W1101 01:00:21.706541   58730 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504_empty.pem, impossibly tiny 0 bytes
	I1101 01:00:21.706552   58730 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 01:00:21.706580   58730 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem (1078 bytes)
	I1101 01:00:21.706606   58730 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem (1123 bytes)
	I1101 01:00:21.706633   58730 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem (1675 bytes)
	I1101 01:00:21.706670   58730 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:00:21.707263   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 01:00:21.734199   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 01:00:21.760230   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 01:00:21.787083   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 01:00:21.810498   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 01:00:21.833905   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 01:00:21.859073   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 01:00:21.881222   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 01:00:21.904432   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem --> /usr/share/ca-certificates/14504.pem (1338 bytes)
	I1101 01:00:21.934873   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /usr/share/ca-certificates/145042.pem (1708 bytes)
	I1101 01:00:21.958353   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 01:00:21.981353   58730 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 01:00:21.997436   58730 ssh_runner.go:195] Run: openssl version
	I1101 01:00:22.003487   58730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14504.pem && ln -fs /usr/share/ca-certificates/14504.pem /etc/ssl/certs/14504.pem"
	I1101 01:00:22.013829   58730 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14504.pem
	I1101 01:00:22.018482   58730 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 01:00:22.018554   58730 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14504.pem
	I1101 01:00:22.024695   58730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14504.pem /etc/ssl/certs/51391683.0"
	I1101 01:00:22.034956   58730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145042.pem && ln -fs /usr/share/ca-certificates/145042.pem /etc/ssl/certs/145042.pem"
	I1101 01:00:22.046182   58730 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145042.pem
	I1101 01:00:22.051197   58730 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 01:00:22.051273   58730 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145042.pem
	I1101 01:00:22.057145   58730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145042.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 01:00:22.067337   58730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 01:00:22.077300   58730 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:22.081973   58730 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:22.082025   58730 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:22.087341   58730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
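	The 8-hex-digit link names used above (51391683.0, 3ec20f2e.0, b5213941.0) are the OpenSSL subject hashes of the corresponding certificates, which is exactly what the preceding -hash invocations compute; for illustration, the last one can be reproduced with:
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  # prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink created above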
	I1101 01:00:22.097021   58730 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 01:00:22.101801   58730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 01:00:22.107498   58730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 01:00:22.113187   58730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 01:00:22.119281   58730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 01:00:22.125109   58730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 01:00:22.130878   58730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
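	For reference, -checkend 86400 makes openssl exit non-zero when the certificate expires within the next 86400 seconds (24 hours), which is presumably how minikube decides whether these certs still need regenerating; a standalone illustration using one of the paths above:
	  openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
	    && echo "valid for at least 24h" || echo "expires within 24h (or already expired)"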
	I1101 01:00:22.136711   58730 kubeadm.go:404] StartCluster: {Name:embed-certs-754132 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:embed-certs-754132 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.83 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 01:00:22.136843   58730 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 01:00:22.136898   58730 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:00:22.172188   58730 cri.go:89] found id: ""
	I1101 01:00:22.172267   58730 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 01:00:22.181863   58730 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1101 01:00:22.181901   58730 kubeadm.go:636] restartCluster start
	I1101 01:00:22.181962   58730 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 01:00:22.190970   58730 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:22.192108   58730 kubeconfig.go:92] found "embed-certs-754132" server: "https://192.168.61.83:8443"
	I1101 01:00:22.194633   58730 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 01:00:22.203708   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:22.203792   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:22.214867   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:22.214889   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:22.214972   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:22.225940   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:22.726677   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:22.726769   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:22.737874   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:23.226416   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:23.226492   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:23.237902   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:22.907053   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:22.907532   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:22.907563   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:22.907487   59797 retry.go:31] will retry after 2.170221456s: waiting for machine to come up
	I1101 01:00:25.079354   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:25.079791   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:25.079831   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:25.079754   59797 retry.go:31] will retry after 2.279141994s: waiting for machine to come up
	I1101 01:00:27.361955   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:27.362423   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:27.362456   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:27.362368   59797 retry.go:31] will retry after 2.772425742s: waiting for machine to come up
	I1101 01:00:23.726108   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:23.726179   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:23.737404   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:24.226007   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:24.226178   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:24.237401   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:24.727058   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:24.727152   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:24.742704   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:25.226166   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:25.226272   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:25.237808   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:25.726161   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:25.726244   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:25.737763   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:26.226321   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:26.226485   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:26.239919   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:26.726488   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:26.726596   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:26.740719   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:27.226157   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:27.226268   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:27.240719   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:27.726272   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:27.726360   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:27.738068   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:28.226882   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:28.226954   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:28.239208   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:30.136893   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:30.137311   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:30.137333   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:30.137274   59797 retry.go:31] will retry after 4.191062934s: waiting for machine to come up
	I1101 01:00:28.726726   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:28.726845   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:28.737955   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:29.226410   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:29.226475   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:29.237886   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:29.726367   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:29.726461   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:29.737767   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:30.226294   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:30.226389   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:30.237767   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:30.726295   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:30.726363   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:30.737691   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:31.226274   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:31.226343   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:31.237801   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:31.726297   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:31.726366   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:31.738060   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:32.204696   58730 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1101 01:00:32.204729   58730 kubeadm.go:1128] stopping kube-system containers ...
	I1101 01:00:32.204741   58730 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 01:00:32.204792   58730 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:00:32.241943   58730 cri.go:89] found id: ""
	I1101 01:00:32.242012   58730 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 01:00:32.256657   58730 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:00:32.265087   58730 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:00:32.265159   58730 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:00:32.273631   58730 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 01:00:32.273654   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:32.379073   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:35.634014   59148 start.go:369] acquired machines lock for "default-k8s-diff-port-639310" in 4m10.491521982s
	I1101 01:00:35.634070   59148 start.go:96] Skipping create...Using existing machine configuration
	I1101 01:00:35.634078   59148 fix.go:54] fixHost starting: 
	I1101 01:00:35.634533   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:00:35.634577   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:00:35.654259   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46439
	I1101 01:00:35.654746   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:00:35.655216   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:00:35.655245   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:00:35.655578   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:00:35.655759   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:35.655905   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetState
	I1101 01:00:35.657604   59148 fix.go:102] recreateIfNeeded on default-k8s-diff-port-639310: state=Stopped err=<nil>
	I1101 01:00:35.657646   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	W1101 01:00:35.657804   59148 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 01:00:35.660028   59148 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-639310" ...
	I1101 01:00:34.332963   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.333486   58823 main.go:141] libmachine: (old-k8s-version-330042) Found IP for machine: 192.168.39.90
	I1101 01:00:34.333518   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has current primary IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.333529   58823 main.go:141] libmachine: (old-k8s-version-330042) Reserving static IP address...
	I1101 01:00:34.333853   58823 main.go:141] libmachine: (old-k8s-version-330042) Reserved static IP address: 192.168.39.90
	I1101 01:00:34.333874   58823 main.go:141] libmachine: (old-k8s-version-330042) Waiting for SSH to be available...
	I1101 01:00:34.333901   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "old-k8s-version-330042", mac: "52:54:00:a2:40:80", ip: "192.168.39.90"} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.333932   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | skip adding static IP to network mk-old-k8s-version-330042 - found existing host DHCP lease matching {name: "old-k8s-version-330042", mac: "52:54:00:a2:40:80", ip: "192.168.39.90"}
	I1101 01:00:34.333954   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Getting to WaitForSSH function...
	I1101 01:00:34.335871   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.336238   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.336275   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.336409   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Using SSH client type: external
	I1101 01:00:34.336446   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa (-rw-------)
	I1101 01:00:34.336480   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.90 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 01:00:34.336501   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | About to run SSH command:
	I1101 01:00:34.336523   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | exit 0
	I1101 01:00:34.431938   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | SSH cmd err, output: <nil>: 
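	The external SSH invocation logged above corresponds to an ordinary ssh command; an equivalent manual login for debugging this machine would look roughly like the following (arguments copied from the log, not an extra step the test performs):
	  ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
	      -i /home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa \
	      docker@192.168.39.90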
	I1101 01:00:34.432324   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetConfigRaw
	I1101 01:00:34.433070   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetIP
	I1101 01:00:34.435967   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.436402   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.436434   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.436696   58823 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/config.json ...
	I1101 01:00:34.436886   58823 machine.go:88] provisioning docker machine ...
	I1101 01:00:34.436903   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:34.437136   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetMachineName
	I1101 01:00:34.437299   58823 buildroot.go:166] provisioning hostname "old-k8s-version-330042"
	I1101 01:00:34.437323   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetMachineName
	I1101 01:00:34.437508   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:34.439785   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.440175   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.440215   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.440316   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:34.440481   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:34.440662   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:34.440800   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:34.440965   58823 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:34.441440   58823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1101 01:00:34.441461   58823 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-330042 && echo "old-k8s-version-330042" | sudo tee /etc/hostname
	I1101 01:00:34.590132   58823 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-330042
	
	I1101 01:00:34.590168   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:34.593018   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.593457   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.593521   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.593623   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:34.593817   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:34.594004   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:34.594151   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:34.594317   58823 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:34.594622   58823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1101 01:00:34.594640   58823 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-330042' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-330042/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-330042' | sudo tee -a /etc/hosts; 
				fi
			fi
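	The shell fragment above idempotently pins the guest's hostname in /etc/hosts, rewriting an existing 127.0.1.1 entry if one is present and appending one otherwise; an illustrative check after it runs:
	  grep '^127.0.1.1' /etc/hosts
	  # expected: 127.0.1.1 old-k8s-version-330042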
	I1101 01:00:34.743448   58823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 01:00:34.743485   58823 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1101 01:00:34.743510   58823 buildroot.go:174] setting up certificates
	I1101 01:00:34.743530   58823 provision.go:83] configureAuth start
	I1101 01:00:34.743545   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetMachineName
	I1101 01:00:34.743848   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetIP
	I1101 01:00:34.746932   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.747302   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.747333   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.747478   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:34.749794   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.750154   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.750185   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.750339   58823 provision.go:138] copyHostCerts
	I1101 01:00:34.750412   58823 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem, removing ...
	I1101 01:00:34.750435   58823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 01:00:34.750504   58823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1101 01:00:34.750620   58823 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem, removing ...
	I1101 01:00:34.750628   58823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 01:00:34.750655   58823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1101 01:00:34.750726   58823 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem, removing ...
	I1101 01:00:34.750736   58823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 01:00:34.750761   58823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1101 01:00:34.750820   58823 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-330042 san=[192.168.39.90 192.168.39.90 localhost 127.0.0.1 minikube old-k8s-version-330042]
	I1101 01:00:34.819269   58823 provision.go:172] copyRemoteCerts
	I1101 01:00:34.819327   58823 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 01:00:34.819354   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:34.822409   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.822852   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.822887   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.823101   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:34.823335   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:34.823520   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:34.823688   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:00:34.928534   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 01:00:34.955140   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1101 01:00:34.982361   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 01:00:35.007980   58823 provision.go:86] duration metric: configureAuth took 264.432358ms
	I1101 01:00:35.008007   58823 buildroot.go:189] setting minikube options for container-runtime
	I1101 01:00:35.008317   58823 config.go:182] Loaded profile config "old-k8s-version-330042": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1101 01:00:35.008450   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:35.011424   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.011790   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:35.011820   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.012054   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:35.012305   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.012505   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.012692   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:35.012898   58823 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:35.013292   58823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1101 01:00:35.013310   58823 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 01:00:35.345179   58823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
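	If needed, the container-runtime option written above can be confirmed on the node; an illustrative check (the wiring of this file into crio.service via an EnvironmentFile is assumed from the minikube guest image and is not shown in this log):
	  cat /etc/sysconfig/crio.minikube
	  systemctl is-active crio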
	I1101 01:00:35.345210   58823 machine.go:91] provisioned docker machine in 908.310008ms
	I1101 01:00:35.345224   58823 start.go:300] post-start starting for "old-k8s-version-330042" (driver="kvm2")
	I1101 01:00:35.345236   58823 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 01:00:35.345283   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:35.345634   58823 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 01:00:35.345666   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:35.348576   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.348945   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:35.348978   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.349171   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:35.349364   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.349527   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:35.349672   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:00:35.448239   58823 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 01:00:35.453459   58823 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 01:00:35.453495   58823 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1101 01:00:35.453589   58823 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1101 01:00:35.453705   58823 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> 145042.pem in /etc/ssl/certs
	I1101 01:00:35.453819   58823 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 01:00:35.464658   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:00:35.488669   58823 start.go:303] post-start completed in 143.429717ms
	I1101 01:00:35.488699   58823 fix.go:56] fixHost completed within 20.248082329s
	I1101 01:00:35.488723   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:35.491535   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.491917   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:35.491962   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.492108   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:35.492302   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.492472   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.492610   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:35.492777   58823 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:35.493085   58823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1101 01:00:35.493097   58823 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1101 01:00:35.633831   58823 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698800435.580601462
	
	I1101 01:00:35.633860   58823 fix.go:206] guest clock: 1698800435.580601462
	I1101 01:00:35.633872   58823 fix.go:219] Guest: 2023-11-01 01:00:35.580601462 +0000 UTC Remote: 2023-11-01 01:00:35.488703086 +0000 UTC m=+308.076532844 (delta=91.898376ms)
	I1101 01:00:35.633899   58823 fix.go:190] guest clock delta is within tolerance: 91.898376ms
	I1101 01:00:35.633906   58823 start.go:83] releasing machines lock for "old-k8s-version-330042", held for 20.393324923s
	I1101 01:00:35.633937   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:35.634276   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetIP
	I1101 01:00:35.637052   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.637411   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:35.637462   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.637668   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:35.638239   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:35.638479   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:35.638661   58823 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 01:00:35.638703   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:35.638792   58823 ssh_runner.go:195] Run: cat /version.json
	I1101 01:00:35.638813   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:35.641913   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:35.641919   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.642071   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:35.642094   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.642106   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.642151   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.642323   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:35.642517   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:35.642547   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.642608   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:35.642640   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:00:35.642736   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.642872   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:35.642994   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:00:35.772469   58823 ssh_runner.go:195] Run: systemctl --version
	I1101 01:00:35.778377   58823 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 01:00:35.930189   58823 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 01:00:35.937481   58823 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 01:00:35.937583   58823 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 01:00:35.959054   58823 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 01:00:35.959081   58823 start.go:472] detecting cgroup driver to use...
	I1101 01:00:35.959166   58823 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 01:00:35.978338   58823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 01:00:35.994627   58823 docker.go:204] disabling cri-docker service (if available) ...
	I1101 01:00:35.994690   58823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 01:00:36.010212   58823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 01:00:36.025616   58823 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 01:00:36.132484   58823 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 01:00:36.266531   58823 docker.go:220] disabling docker service ...
	I1101 01:00:36.266604   58823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 01:00:36.280303   58823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 01:00:36.291905   58823 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 01:00:36.413114   58823 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 01:00:36.527297   58823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 01:00:36.540547   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 01:00:36.561997   58823 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1101 01:00:36.562070   58823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:36.574735   58823 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 01:00:36.574809   58823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:36.584015   58823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:36.592896   58823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:36.602199   58823 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 01:00:36.611742   58823 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 01:00:36.620073   58823 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 01:00:36.620140   58823 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 01:00:36.633237   58823 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 01:00:36.641679   58823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 01:00:36.786323   58823 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 01:00:37.011240   58823 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 01:00:37.011332   58823 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 01:00:37.016349   58823 start.go:540] Will wait 60s for crictl version
	I1101 01:00:37.016417   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:37.020952   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 01:00:37.068566   58823 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1101 01:00:37.068649   58823 ssh_runner.go:195] Run: crio --version
	I1101 01:00:37.119257   58823 ssh_runner.go:195] Run: crio --version
	I1101 01:00:37.170471   58823 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1101 01:00:37.172128   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetIP
	I1101 01:00:37.175116   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:37.175552   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:37.175583   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:37.175834   58823 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1101 01:00:37.179970   58823 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:00:37.193466   58823 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1101 01:00:37.193550   58823 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:00:37.239780   58823 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1101 01:00:37.239851   58823 ssh_runner.go:195] Run: which lz4
	I1101 01:00:37.243871   58823 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1101 01:00:37.248203   58823 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 01:00:37.248243   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
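	The preload step above is a check-then-transfer pattern: stat /preloaded.tar.lz4 on the guest, and only copy the cached tarball over when the stat fails. A minimal Go sketch of that pattern follows; the host, key path and file names are illustrative assumptions, and the real ssh_runner uses an in-process SSH client rather than shelling out to ssh/scp.

	// Sketch only: check for a remote file and copy it when missing, mirroring the
	// "existence check ... Process exited with status 1" -> scp sequence above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func ensurePreload(host, key, local, remote string) error {
		// Existence check; a non-zero exit ("No such file or directory") means we copy.
		if err := exec.Command("ssh", "-i", key, host, "stat", remote).Run(); err == nil {
			fmt.Println("existence check passed, skipping transfer")
			return nil
		}
		fmt.Println("preload missing on guest, copying", local)
		return exec.Command("scp", "-i", key, local, host+":"+remote).Run()
	}

	func main() {
		// Placeholder arguments, loosely modelled on the paths in the log above.
		err := ensurePreload("docker@192.168.39.90",
			"/home/jenkins/.minikube/machines/old-k8s-version-330042/id_rsa",
			"preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4",
			"/preloaded.tar.lz4")
		if err != nil {
			fmt.Println(err)
		}
	}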
	I1101 01:00:33.273385   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:33.468847   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:33.558663   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:33.632226   58730 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:00:33.632305   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:33.645291   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:34.159920   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:34.660339   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:35.159837   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:35.659362   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:36.159870   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:36.189698   58730 api_server.go:72] duration metric: took 2.557471176s to wait for apiserver process to appear ...
	I1101 01:00:36.189726   58730 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:00:36.189746   58730 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8443/healthz ...
	I1101 01:00:35.662001   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Start
	I1101 01:00:35.662248   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Ensuring networks are active...
	I1101 01:00:35.663075   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Ensuring network default is active
	I1101 01:00:35.663589   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Ensuring network mk-default-k8s-diff-port-639310 is active
	I1101 01:00:35.664066   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Getting domain xml...
	I1101 01:00:35.664780   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Creating domain...
	I1101 01:00:37.046385   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting to get IP...
	I1101 01:00:37.047592   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.048056   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.048160   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:37.048064   59967 retry.go:31] will retry after 244.19131ms: waiting for machine to come up
	I1101 01:00:37.293636   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.294421   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.294535   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:37.294483   59967 retry.go:31] will retry after 281.302105ms: waiting for machine to come up
	I1101 01:00:37.577271   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.577934   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.577962   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:37.577874   59967 retry.go:31] will retry after 376.713113ms: waiting for machine to come up
	I1101 01:00:37.956666   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.957154   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.957182   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:37.957125   59967 retry.go:31] will retry after 366.92844ms: waiting for machine to come up
	I1101 01:00:38.325741   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:38.326257   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:38.326291   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:38.326226   59967 retry.go:31] will retry after 478.435824ms: waiting for machine to come up
	I1101 01:00:38.806215   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:38.806928   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:38.806965   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:38.806904   59967 retry.go:31] will retry after 910.120665ms: waiting for machine to come up
	I1101 01:00:39.718641   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:39.719281   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:39.719307   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:39.719210   59967 retry.go:31] will retry after 1.017844602s: waiting for machine to come up
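	The retry.go lines above show the wait-for-IP loop: each failed DHCP-lease lookup is followed by a slightly longer, jittered delay before the next attempt. A minimal Go sketch of that backoff loop, with placeholder names and delays chosen only to roughly match the logged intervals:

	// Sketch only: retry a lookup with a growing, jittered delay until it succeeds
	// or the deadline passes, as in "will retry after ...: waiting for machine to come up".
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
		base := 200 * time.Millisecond
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			if ip, err := lookup(); err == nil && ip != "" {
				return ip, nil
			}
			delay := base + time.Duration(rand.Int63n(int64(base))) // add jitter
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
			time.Sleep(delay)
			base = base * 3 / 2 // grow the base delay between attempts
		}
		return "", errors.New("timed out waiting for machine IP")
	}

	func main() {
		_, err := waitForIP(func() (string, error) { return "", errors.New("no lease yet") }, 3*time.Second)
		fmt.Println(err)
	}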
	I1101 01:00:40.636542   58730 api_server.go:279] https://192.168.61.83:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 01:00:40.636586   58730 api_server.go:103] status: https://192.168.61.83:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 01:00:40.636602   58730 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8443/healthz ...
	I1101 01:00:40.687211   58730 api_server.go:279] https://192.168.61.83:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 01:00:40.687258   58730 api_server.go:103] status: https://192.168.61.83:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 01:00:41.187988   58730 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8443/healthz ...
	I1101 01:00:41.197585   58730 api_server.go:279] https://192.168.61.83:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:00:41.197626   58730 api_server.go:103] status: https://192.168.61.83:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:00:41.688019   58730 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8443/healthz ...
	I1101 01:00:41.698406   58730 api_server.go:279] https://192.168.61.83:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:00:41.698439   58730 api_server.go:103] status: https://192.168.61.83:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:00:42.188141   58730 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8443/healthz ...
	I1101 01:00:42.195663   58730 api_server.go:279] https://192.168.61.83:8443/healthz returned 200:
	ok
	I1101 01:00:42.204715   58730 api_server.go:141] control plane version: v1.28.3
	I1101 01:00:42.204746   58730 api_server.go:131] duration metric: took 6.015012484s to wait for apiserver health ...
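	The healthz wait above polls the apiserver's /healthz endpoint, treating 403 (RBAC roles not bootstrapped yet) and 500 (poststarthooks still pending) as retryable until a 200 arrives. A minimal Go sketch of that polling pattern; the endpoint, interval and function names are illustrative assumptions rather than minikube's actual api_server.go code:

	// Sketch only: poll an HTTPS /healthz endpoint until it returns 200 or a timeout expires.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver's certificate is not trusted by the host running the check.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				code := resp.StatusCode
				resp.Body.Close()
				if code == http.StatusOK {
					return nil // healthz returned 200: ok
				}
				// 403 and 500 are transient during startup; keep polling.
				fmt.Printf("healthz returned %d, retrying\n", code)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.83:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}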
	I1101 01:00:42.204756   58730 cni.go:84] Creating CNI manager for ""
	I1101 01:00:42.204764   58730 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:00:42.206831   58730 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:00:38.979032   58823 crio.go:444] Took 1.735199 seconds to copy over tarball
	I1101 01:00:38.979127   58823 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 01:00:42.235526   58823 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.256363592s)
	I1101 01:00:42.235558   58823 crio.go:451] Took 3.256498 seconds to extract the tarball
	I1101 01:00:42.235592   58823 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 01:00:42.278508   58823 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:00:42.332199   58823 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1101 01:00:42.332225   58823 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1101 01:00:42.332323   58823 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:00:42.332383   58823 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1101 01:00:42.332425   58823 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1101 01:00:42.332445   58823 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1101 01:00:42.332394   58823 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1101 01:00:42.332554   58823 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1101 01:00:42.332552   58823 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1101 01:00:42.332611   58823 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1101 01:00:42.333952   58823 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1101 01:00:42.333965   58823 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1101 01:00:42.333971   58823 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1101 01:00:42.333973   58823 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:00:42.333951   58823 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1101 01:00:42.333959   58823 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1101 01:00:42.334015   58823 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1101 01:00:42.334422   58823 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1101 01:00:42.208425   58730 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:00:42.243672   58730 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1101 01:00:42.270472   58730 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:00:40.739283   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:40.739839   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:40.739871   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:40.739751   59967 retry.go:31] will retry after 924.830892ms: waiting for machine to come up
	I1101 01:00:41.666231   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:41.666922   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:41.666949   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:41.666878   59967 retry.go:31] will retry after 1.792434708s: waiting for machine to come up
	I1101 01:00:43.461158   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:43.461723   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:43.461758   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:43.461651   59967 retry.go:31] will retry after 1.458280506s: waiting for machine to come up
	I1101 01:00:44.921321   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:44.922072   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:44.922105   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:44.922018   59967 retry.go:31] will retry after 2.732488928s: waiting for machine to come up
	I1101 01:00:42.548949   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1101 01:00:42.549011   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1101 01:00:42.552787   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1101 01:00:42.554125   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1101 01:00:42.559301   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1101 01:00:42.560733   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1101 01:00:42.564609   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1101 01:00:42.857456   58823 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1101 01:00:42.857497   58823 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1101 01:00:42.857537   58823 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1101 01:00:42.857565   58823 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1101 01:00:42.857583   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.857502   58823 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1101 01:00:42.857597   58823 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1101 01:00:42.857644   58823 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1101 01:00:42.857703   58823 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1101 01:00:42.857733   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.857663   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.857666   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.880301   58823 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1101 01:00:42.880350   58823 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1101 01:00:42.880362   58823 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1101 01:00:42.880404   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.880421   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1101 01:00:42.880432   58823 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1101 01:00:42.880473   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.880475   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1101 01:00:42.880542   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1101 01:00:42.880377   58823 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1101 01:00:42.880587   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1101 01:00:42.880610   58823 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1101 01:00:42.880663   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.958449   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1101 01:00:42.975151   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1101 01:00:42.975188   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1101 01:00:42.979136   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1101 01:00:42.979198   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1101 01:00:42.979246   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1101 01:00:42.979306   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1101 01:00:43.059447   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1101 01:00:43.059470   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1101 01:00:43.059515   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1101 01:00:43.059572   58823 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1101 01:00:43.065313   58823 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1101 01:00:43.065337   58823 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1101 01:00:43.065397   58823 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1101 01:00:43.212775   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:00:44.821509   58823 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.756075689s)
	I1101 01:00:44.821542   58823 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1101 01:00:44.821600   58823 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.608800531s)
	I1101 01:00:44.821639   58823 cache_images.go:92] LoadImages completed in 2.489401317s
	W1101 01:00:44.821749   58823 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0: no such file or directory
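	The cache_images phase above compares each image's ID in the container runtime against a pinned hash; on a mismatch the image "needs transfer", is removed with crictl, and is reloaded from a cached tarball with podman load. A minimal Go sketch of that decision, with illustrative names, shelling out to the same commands the log shows:

	// Sketch only: decide whether an image must be reloaded and, if so, load it from cache.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func needsTransfer(image, wantID string) bool {
		out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
		if err != nil {
			return true // image not present at all
		}
		return strings.TrimSpace(string(out)) != wantID
	}

	func loadFromCache(image, tarball string) error {
		// Remove any stale copy (ignore errors if it was never there), then load the cached one.
		_ = exec.Command("sudo", "crictl", "rmi", image).Run()
		return exec.Command("sudo", "podman", "load", "-i", tarball).Run()
	}

	func main() {
		img := "registry.k8s.io/pause:3.1"
		if needsTransfer(img, "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e") {
			if err := loadFromCache(img, "/var/lib/minikube/images/pause_3.1"); err != nil {
				fmt.Println("load failed:", err)
			}
		}
	}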
	I1101 01:00:44.821888   58823 ssh_runner.go:195] Run: crio config
	I1101 01:00:44.911017   58823 cni.go:84] Creating CNI manager for ""
	I1101 01:00:44.911094   58823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:00:44.911132   58823 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 01:00:44.911173   58823 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.90 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-330042 NodeName:old-k8s-version-330042 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1101 01:00:44.911365   58823 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-330042"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-330042
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.90:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 01:00:44.911510   58823 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-330042 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-330042 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1101 01:00:44.911601   58823 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1101 01:00:44.925733   58823 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 01:00:44.925810   58823 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 01:00:44.939166   58823 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1101 01:00:44.962847   58823 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 01:00:44.986855   58823 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1101 01:00:45.011998   58823 ssh_runner.go:195] Run: grep 192.168.39.90	control-plane.minikube.internal$ /etc/hosts
	I1101 01:00:45.017160   58823 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.90	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:00:45.035826   58823 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042 for IP: 192.168.39.90
	I1101 01:00:45.035866   58823 certs.go:190] acquiring lock for shared ca certs: {Name:mk6185b3938c4b51e80f57b7c81926c81b632b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:00:45.036097   58823 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key
	I1101 01:00:45.036161   58823 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key
	I1101 01:00:45.036276   58823 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/client.key
	I1101 01:00:45.036363   58823 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/apiserver.key.05a13cdc
	I1101 01:00:45.036423   58823 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/proxy-client.key
	I1101 01:00:45.036600   58823 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem (1338 bytes)
	W1101 01:00:45.036642   58823 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504_empty.pem, impossibly tiny 0 bytes
	I1101 01:00:45.036657   58823 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 01:00:45.036697   58823 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem (1078 bytes)
	I1101 01:00:45.036734   58823 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem (1123 bytes)
	I1101 01:00:45.036769   58823 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem (1675 bytes)
	I1101 01:00:45.036837   58823 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:00:45.037808   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 01:00:45.071828   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 01:00:45.105069   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 01:00:45.136650   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 01:00:45.169633   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 01:00:45.202102   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 01:00:45.234227   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 01:00:45.265901   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 01:00:45.297720   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem --> /usr/share/ca-certificates/14504.pem (1338 bytes)
	I1101 01:00:45.330915   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /usr/share/ca-certificates/145042.pem (1708 bytes)
	I1101 01:00:45.361364   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 01:00:45.391023   58823 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 01:00:45.412643   58823 ssh_runner.go:195] Run: openssl version
	I1101 01:00:45.419938   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145042.pem && ln -fs /usr/share/ca-certificates/145042.pem /etc/ssl/certs/145042.pem"
	I1101 01:00:45.433972   58823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145042.pem
	I1101 01:00:45.439966   58823 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 01:00:45.440070   58823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145042.pem
	I1101 01:00:45.447248   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145042.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 01:00:45.461261   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 01:00:45.475166   58823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:45.481174   58823 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:45.481281   58823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:45.488190   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 01:00:45.502428   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14504.pem && ln -fs /usr/share/ca-certificates/14504.pem /etc/ssl/certs/14504.pem"
	I1101 01:00:45.515353   58823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14504.pem
	I1101 01:00:45.520135   58823 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 01:00:45.520196   58823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14504.pem
	I1101 01:00:45.525605   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14504.pem /etc/ssl/certs/51391683.0"
	I1101 01:00:45.535886   58823 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 01:00:45.540671   58823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 01:00:45.546973   58823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 01:00:45.554439   58823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 01:00:45.562216   58823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 01:00:45.570082   58823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 01:00:45.578073   58823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
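	The openssl x509 -checkend 86400 runs above verify that each certificate is still valid 24 hours from now. The same check can be expressed with Go's crypto/x509, as in this minimal sketch (the certificate path is a placeholder):

	// Sketch only: report whether a PEM-encoded certificate expires within the given window.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println(err)
			return
		}
		if soon {
			fmt.Println("certificate expires within 24h, regenerate it")
		}
	}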
	I1101 01:00:45.586056   58823 kubeadm.go:404] StartCluster: {Name:old-k8s-version-330042 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-330042 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 01:00:45.586202   58823 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 01:00:45.586270   58823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:00:45.632205   58823 cri.go:89] found id: ""
	I1101 01:00:45.632279   58823 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 01:00:45.646397   58823 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1101 01:00:45.646432   58823 kubeadm.go:636] restartCluster start
	I1101 01:00:45.646492   58823 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 01:00:45.660754   58823 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:45.662302   58823 kubeconfig.go:92] found "old-k8s-version-330042" server: "https://192.168.39.90:8443"
	I1101 01:00:45.665617   58823 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 01:00:45.679127   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:45.679203   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:45.697578   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:45.697601   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:45.697662   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:45.715086   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:46.215841   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:46.215939   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:46.233039   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:46.715162   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:46.715283   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:46.727101   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:47.215409   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:47.215512   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:47.228104   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:43.297105   58730 system_pods.go:59] 9 kube-system pods found
	I1101 01:00:43.452043   58730 system_pods.go:61] "coredns-5dd5756b68-9hvh7" [d7d126c2-c270-452c-b939-15303a174742] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 01:00:43.452062   58730 system_pods.go:61] "coredns-5dd5756b68-gptmc" [fbbb9f17-32d6-456d-8171-eadcf64b11a8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 01:00:43.452074   58730 system_pods.go:61] "etcd-embed-certs-754132" [3c7474c1-788e-461d-bd20-e62c3c12cf27] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 01:00:43.452086   58730 system_pods.go:61] "kube-apiserver-embed-certs-754132" [d218a8d6-536c-400a-b81e-325b89ab475b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 01:00:43.452116   58730 system_pods.go:61] "kube-controller-manager-embed-certs-754132" [930b7861-b807-4f24-ba3c-9365a1e8dd8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 01:00:43.452128   58730 system_pods.go:61] "kube-proxy-d5d5x" [c7a6d923-0b37-452b-9979-0a64c05ee737] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 01:00:43.452142   58730 system_pods.go:61] "kube-scheduler-embed-certs-754132" [fd9c0833-f9d4-41cf-b5dd-b676ea5da6ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 01:00:43.452156   58730 system_pods.go:61] "metrics-server-57f55c9bc5-znchz" [60da0fbf-a2c4-4910-b06b-251b33b8ad0b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:00:43.452169   58730 system_pods.go:61] "storage-provisioner" [fbece4fb-6f83-4f17-acb8-94f493dd72e9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 01:00:43.452185   58730 system_pods.go:74] duration metric: took 1.181683794s to wait for pod list to return data ...
	I1101 01:00:43.452198   58730 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:00:44.181694   58730 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:00:44.181739   58730 node_conditions.go:123] node cpu capacity is 2
	I1101 01:00:44.181756   58730 node_conditions.go:105] duration metric: took 729.549671ms to run NodePressure ...
	I1101 01:00:44.181784   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:45.274729   58730 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.092921592s)
	I1101 01:00:45.274761   58730 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1101 01:00:45.285444   58730 kubeadm.go:787] kubelet initialised
	I1101 01:00:45.285478   58730 kubeadm.go:788] duration metric: took 10.704919ms waiting for restarted kubelet to initialise ...
	I1101 01:00:45.285489   58730 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:00:45.303122   58730 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-9hvh7" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:47.333376   58730 pod_ready.go:92] pod "coredns-5dd5756b68-9hvh7" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:47.333404   58730 pod_ready.go:81] duration metric: took 2.030252648s waiting for pod "coredns-5dd5756b68-9hvh7" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:47.333415   58730 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-gptmc" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:47.339165   58730 pod_ready.go:92] pod "coredns-5dd5756b68-gptmc" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:47.339189   58730 pod_ready.go:81] duration metric: took 5.76803ms waiting for pod "coredns-5dd5756b68-gptmc" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:47.339201   58730 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:47.656259   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:47.656733   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:47.656767   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:47.656688   59967 retry.go:31] will retry after 3.546373187s: waiting for machine to come up
	I1101 01:00:47.716219   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:47.716302   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:47.729221   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:48.215453   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:48.215562   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:48.230259   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:48.715905   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:48.716035   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:48.729001   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:49.216123   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:49.216217   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:49.232128   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:49.715640   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:49.715708   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:49.729098   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:50.215271   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:50.215379   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:50.228075   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:50.715151   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:50.715256   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:50.726839   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:51.215204   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:51.215293   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:51.227412   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:51.715753   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:51.715870   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:51.728794   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:52.215318   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:52.215437   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:52.227527   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:48.860188   58730 pod_ready.go:92] pod "etcd-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:48.860215   58730 pod_ready.go:81] duration metric: took 1.521005544s waiting for pod "etcd-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:48.860228   58730 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:50.286848   58730 pod_ready.go:92] pod "kube-apiserver-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:50.286882   58730 pod_ready.go:81] duration metric: took 1.426640629s waiting for pod "kube-apiserver-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:50.286894   58730 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:51.886531   58730 pod_ready.go:92] pod "kube-controller-manager-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:51.886555   58730 pod_ready.go:81] duration metric: took 1.599653882s waiting for pod "kube-controller-manager-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:51.886565   58730 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d5d5x" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:52.079723   58730 pod_ready.go:92] pod "kube-proxy-d5d5x" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:52.079752   58730 pod_ready.go:81] duration metric: took 193.181169ms waiting for pod "kube-proxy-d5d5x" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:52.079766   58730 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:51.204423   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:51.204909   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:51.204945   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:51.204854   59967 retry.go:31] will retry after 3.382936792s: waiting for machine to come up
	I1101 01:00:54.588976   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.589398   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Found IP for machine: 192.168.72.97
	I1101 01:00:54.589427   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Reserving static IP address...
	I1101 01:00:54.589447   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has current primary IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.589764   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Reserved static IP address: 192.168.72.97
	I1101 01:00:54.589783   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for SSH to be available...
	I1101 01:00:54.589811   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-639310", mac: "52:54:00:83:e0:44", ip: "192.168.72.97"} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.589841   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | skip adding static IP to network mk-default-k8s-diff-port-639310 - found existing host DHCP lease matching {name: "default-k8s-diff-port-639310", mac: "52:54:00:83:e0:44", ip: "192.168.72.97"}
	I1101 01:00:54.589858   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | Getting to WaitForSSH function...
	I1101 01:00:54.591920   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.592295   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.592327   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.592518   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | Using SSH client type: external
	I1101 01:00:54.592546   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa (-rw-------)
	I1101 01:00:54.592568   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 01:00:54.592581   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | About to run SSH command:
	I1101 01:00:54.592605   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | exit 0
	I1101 01:00:54.687664   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | SSH cmd err, output: <nil>: 
	I1101 01:00:54.688005   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetConfigRaw
	I1101 01:00:54.688653   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetIP
	I1101 01:00:54.691258   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.691761   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.691803   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.692096   59148 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/config.json ...
	I1101 01:00:54.692278   59148 machine.go:88] provisioning docker machine ...
	I1101 01:00:54.692297   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:54.692554   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetMachineName
	I1101 01:00:54.692765   59148 buildroot.go:166] provisioning hostname "default-k8s-diff-port-639310"
	I1101 01:00:54.692787   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetMachineName
	I1101 01:00:54.692962   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:54.695491   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.695887   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.695917   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.696074   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:54.696280   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:54.696477   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:54.696624   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:54.696817   59148 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:54.697275   59148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.97 22 <nil> <nil>}
	I1101 01:00:54.697298   59148 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-639310 && echo "default-k8s-diff-port-639310" | sudo tee /etc/hostname
	I1101 01:00:54.836084   59148 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-639310
	
	I1101 01:00:54.836118   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:54.839109   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.839437   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.839463   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.839732   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:54.839986   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:54.840131   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:54.840290   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:54.840501   59148 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:54.840865   59148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.97 22 <nil> <nil>}
	I1101 01:00:54.840885   59148 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-639310' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-639310/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-639310' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 01:00:54.979804   59148 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 01:00:54.979841   59148 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1101 01:00:54.979870   59148 buildroot.go:174] setting up certificates
	I1101 01:00:54.979881   59148 provision.go:83] configureAuth start
	I1101 01:00:54.979898   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetMachineName
	I1101 01:00:54.980246   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetIP
	I1101 01:00:54.983397   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.983760   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.983794   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.984029   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:54.986746   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.987112   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.987160   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.987328   59148 provision.go:138] copyHostCerts
	I1101 01:00:54.987418   59148 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem, removing ...
	I1101 01:00:54.987437   59148 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 01:00:54.987507   59148 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1101 01:00:54.987619   59148 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem, removing ...
	I1101 01:00:54.987628   59148 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 01:00:54.987651   59148 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1101 01:00:54.987707   59148 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem, removing ...
	I1101 01:00:54.987714   59148 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 01:00:54.987731   59148 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1101 01:00:54.987773   59148 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-639310 san=[192.168.72.97 192.168.72.97 localhost 127.0.0.1 minikube default-k8s-diff-port-639310]
	I1101 01:00:56.081549   58676 start.go:369] acquired machines lock for "no-preload-008483" in 57.600332472s
	I1101 01:00:56.081600   58676 start.go:96] Skipping create...Using existing machine configuration
	I1101 01:00:56.081611   58676 fix.go:54] fixHost starting: 
	I1101 01:00:56.082003   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:00:56.082041   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:00:56.098896   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33091
	I1101 01:00:56.099300   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:00:56.099786   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:00:56.099817   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:00:56.100159   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:00:56.100364   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:00:56.100511   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetState
	I1101 01:00:56.104041   58676 fix.go:102] recreateIfNeeded on no-preload-008483: state=Stopped err=<nil>
	I1101 01:00:56.104071   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	W1101 01:00:56.104250   58676 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 01:00:56.106287   58676 out.go:177] * Restarting existing kvm2 VM for "no-preload-008483" ...
	I1101 01:00:52.715585   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:52.715665   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:52.726877   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:53.216119   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:53.216202   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:53.228700   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:53.715253   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:53.715342   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:53.729029   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:54.215451   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:54.215554   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:54.228217   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:54.715451   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:54.715513   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:54.727356   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:55.216034   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:55.216130   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:55.227905   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:55.680067   58823 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1101 01:00:55.680120   58823 kubeadm.go:1128] stopping kube-system containers ...
	I1101 01:00:55.680135   58823 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 01:00:55.680204   58823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:00:55.726658   58823 cri.go:89] found id: ""
	I1101 01:00:55.726744   58823 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 01:00:55.748477   58823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:00:55.758933   58823 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:00:55.759013   58823 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:00:55.769130   58823 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 01:00:55.769156   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:55.911136   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:57.164062   58823 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.252874473s)
	I1101 01:00:57.164095   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:57.403267   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:55.270327   59148 provision.go:172] copyRemoteCerts
	I1101 01:00:55.270394   59148 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 01:00:55.270418   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:55.272988   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.273410   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:55.273444   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.273609   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:55.273818   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:55.273966   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:55.274113   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:00:55.367354   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 01:00:55.391069   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1101 01:00:55.413001   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 01:00:55.436904   59148 provision.go:86] duration metric: configureAuth took 457.006108ms
	I1101 01:00:55.436930   59148 buildroot.go:189] setting minikube options for container-runtime
	I1101 01:00:55.437115   59148 config.go:182] Loaded profile config "default-k8s-diff-port-639310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:00:55.437187   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:55.440286   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.440627   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:55.440662   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.440789   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:55.440989   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:55.441187   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:55.441330   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:55.441491   59148 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:55.441905   59148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.97 22 <nil> <nil>}
	I1101 01:00:55.441928   59148 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 01:00:55.788340   59148 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 01:00:55.788372   59148 machine.go:91] provisioned docker machine in 1.096081387s
	I1101 01:00:55.788386   59148 start.go:300] post-start starting for "default-k8s-diff-port-639310" (driver="kvm2")
	I1101 01:00:55.788401   59148 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 01:00:55.788443   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:55.788777   59148 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 01:00:55.788846   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:55.792110   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.792594   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:55.792626   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.792829   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:55.793080   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:55.793273   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:55.793421   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:00:55.893108   59148 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 01:00:55.898425   59148 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 01:00:55.898452   59148 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1101 01:00:55.898530   59148 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1101 01:00:55.898619   59148 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> 145042.pem in /etc/ssl/certs
	I1101 01:00:55.898751   59148 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 01:00:55.909396   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:00:55.943412   59148 start.go:303] post-start completed in 154.998365ms
	I1101 01:00:55.943440   59148 fix.go:56] fixHost completed within 20.309363198s
	I1101 01:00:55.943464   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:55.946417   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.946777   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:55.946810   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.947048   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:55.947268   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:55.947484   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:55.947662   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:55.947849   59148 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:55.948212   59148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.97 22 <nil> <nil>}
	I1101 01:00:55.948225   59148 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1101 01:00:56.081387   59148 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698800456.033536949
	
	I1101 01:00:56.081411   59148 fix.go:206] guest clock: 1698800456.033536949
	I1101 01:00:56.081422   59148 fix.go:219] Guest: 2023-11-01 01:00:56.033536949 +0000 UTC Remote: 2023-11-01 01:00:55.943445038 +0000 UTC m=+270.963710441 (delta=90.091911ms)
	I1101 01:00:56.081446   59148 fix.go:190] guest clock delta is within tolerance: 90.091911ms
	I1101 01:00:56.081451   59148 start.go:83] releasing machines lock for "default-k8s-diff-port-639310", held for 20.447404197s
	I1101 01:00:56.081484   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:56.081826   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetIP
	I1101 01:00:56.084827   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:56.085289   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:56.085326   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:56.085543   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:56.086049   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:56.086272   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:56.086374   59148 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 01:00:56.086425   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:56.086677   59148 ssh_runner.go:195] Run: cat /version.json
	I1101 01:00:56.086709   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:56.089377   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:56.089696   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:56.089784   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:56.089841   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:56.090077   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:56.090088   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:56.090108   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:56.090256   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:56.090329   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:56.090405   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:56.090479   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:56.090557   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:56.090613   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:00:56.090681   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:00:56.220669   59148 ssh_runner.go:195] Run: systemctl --version
	I1101 01:00:56.226971   59148 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 01:00:56.375845   59148 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 01:00:56.383893   59148 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 01:00:56.383986   59148 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 01:00:56.404009   59148 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 01:00:56.404035   59148 start.go:472] detecting cgroup driver to use...
	I1101 01:00:56.404107   59148 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 01:00:56.420015   59148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 01:00:56.435577   59148 docker.go:204] disabling cri-docker service (if available) ...
	I1101 01:00:56.435652   59148 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 01:00:56.448542   59148 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 01:00:56.465197   59148 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 01:00:56.607142   59148 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 01:00:56.739287   59148 docker.go:220] disabling docker service ...
	I1101 01:00:56.739366   59148 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 01:00:56.753861   59148 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 01:00:56.768891   59148 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 01:00:56.893929   59148 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 01:00:57.022891   59148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 01:00:57.039063   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 01:00:57.058893   59148 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 01:00:57.058964   59148 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:57.070769   59148 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 01:00:57.070845   59148 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:57.082528   59148 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:57.094350   59148 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:57.105953   59148 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 01:00:57.117745   59148 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 01:00:57.128493   59148 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 01:00:57.128553   59148 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 01:00:57.145858   59148 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 01:00:57.157318   59148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 01:00:57.288371   59148 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 01:00:57.489356   59148 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 01:00:57.489458   59148 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 01:00:57.495837   59148 start.go:540] Will wait 60s for crictl version
	I1101 01:00:57.495907   59148 ssh_runner.go:195] Run: which crictl
	I1101 01:00:57.500572   59148 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 01:00:57.546076   59148 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1101 01:00:57.546245   59148 ssh_runner.go:195] Run: crio --version
	I1101 01:00:57.601745   59148 ssh_runner.go:195] Run: crio --version
	I1101 01:00:57.664097   59148 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1101 01:00:54.387561   58730 pod_ready.go:102] pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace has status "Ready":"False"
	I1101 01:00:56.388062   58730 pod_ready.go:92] pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:56.388085   58730 pod_ready.go:81] duration metric: took 4.308312567s waiting for pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:56.388094   58730 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:57.666096   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetIP
	I1101 01:00:57.670028   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:57.670437   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:57.670472   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:57.670760   59148 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1101 01:00:57.675850   59148 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:00:57.689379   59148 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 01:00:57.689439   59148 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:00:57.736333   59148 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1101 01:00:57.736404   59148 ssh_runner.go:195] Run: which lz4
	I1101 01:00:57.740532   59148 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 01:00:57.745488   59148 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 01:00:57.745535   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1101 01:00:59.649981   59148 crio.go:444] Took 1.909486 seconds to copy over tarball
	I1101 01:00:59.650070   59148 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
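The lines above show the preload flow: `crictl images --output json` is queried for the expected control-plane image, and since it is missing, the preload tarball is copied over and unpacked with lz4 into /var. Below is a minimal, illustrative sketch of that check-then-extract pattern; the helper name imagePreloaded and the hard-coded paths are assumptions for the example, not minikube's API, and it assumes crictl and lz4 are installed on the host.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// imagePreloaded asks crictl for the image list and reports whether any
// repo tag contains the requested name.
func imagePreloaded(name string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var payload struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	if err := json.Unmarshal(out, &payload); err != nil {
		return false, err
	}
	for _, img := range payload.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, name) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := imagePreloaded("registry.k8s.io/kube-apiserver:v1.28.3")
	if err != nil {
		fmt.Println("crictl check failed:", err)
		return
	}
	if !ok {
		// Fall back to extracting a previously copied preload tarball,
		// mirroring the "tar -I lz4 -C /var -xf /preloaded.tar.lz4" step above.
		if err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4").Run(); err != nil {
			fmt.Println("extract failed:", err)
		}
	}
}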
	I1101 01:00:56.107642   58676 main.go:141] libmachine: (no-preload-008483) Calling .Start
	I1101 01:00:56.107815   58676 main.go:141] libmachine: (no-preload-008483) Ensuring networks are active...
	I1101 01:00:56.108696   58676 main.go:141] libmachine: (no-preload-008483) Ensuring network default is active
	I1101 01:00:56.109190   58676 main.go:141] libmachine: (no-preload-008483) Ensuring network mk-no-preload-008483 is active
	I1101 01:00:56.109623   58676 main.go:141] libmachine: (no-preload-008483) Getting domain xml...
	I1101 01:00:56.110400   58676 main.go:141] libmachine: (no-preload-008483) Creating domain...
	I1101 01:00:57.626479   58676 main.go:141] libmachine: (no-preload-008483) Waiting to get IP...
	I1101 01:00:57.627653   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:00:57.628245   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:00:57.628315   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:00:57.628210   60142 retry.go:31] will retry after 306.868541ms: waiting for machine to come up
	I1101 01:00:57.936854   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:00:57.937358   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:00:57.937392   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:00:57.937309   60142 retry.go:31] will retry after 366.94808ms: waiting for machine to come up
	I1101 01:00:58.306219   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:00:58.306880   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:00:58.306909   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:00:58.306815   60142 retry.go:31] will retry after 470.784378ms: waiting for machine to come up
	I1101 01:00:58.781164   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:00:58.781784   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:00:58.781810   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:00:58.781686   60142 retry.go:31] will retry after 475.883045ms: waiting for machine to come up
	I1101 01:00:59.259400   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:00:59.259922   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:00:59.259964   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:00:59.259816   60142 retry.go:31] will retry after 533.372113ms: waiting for machine to come up
	I1101 01:00:59.794619   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:00:59.795307   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:00:59.795335   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:00:59.795222   60142 retry.go:31] will retry after 643.335947ms: waiting for machine to come up
	I1101 01:01:00.440339   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:00.440876   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:00.440901   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:00.440795   60142 retry.go:31] will retry after 899.488876ms: waiting for machine to come up
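The repeated "will retry after …: waiting for machine to come up" lines are a retry loop with a growing delay while the freshly started VM waits for a DHCP lease. A rough sketch of that pattern follows; lookupIP is a stand-in placeholder (the real flow queries the libvirt DHCP leases), and the timeout and backoff growth are illustrative, not minikube's exact values.

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP is a placeholder for the lease lookup that fails until the VM is up.
func lookupIP() (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP polls lookupIP until it succeeds or the deadline passes,
// sleeping a little longer after each failed attempt.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", backoff)
		time.Sleep(backoff)
		if backoff < 5*time.Second {
			backoff += backoff / 2 // grow roughly 1.5x per attempt, as the observed delays do
		}
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	if _, err := waitForIP(10 * time.Second); err != nil {
		fmt.Println(err)
	}
}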
	I1101 01:00:57.545316   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:57.641733   58823 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:00:57.641812   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:57.655826   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:58.173767   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:58.674113   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:59.174394   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:59.674240   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:59.705758   58823 api_server.go:72] duration metric: took 2.064024888s to wait for apiserver process to appear ...
	I1101 01:00:59.705791   58823 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:00:59.705814   58823 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I1101 01:00:58.517913   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:00.993028   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:03.059373   59148 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.409271602s)
	I1101 01:01:03.059403   59148 crio.go:451] Took 3.409395 seconds to extract the tarball
	I1101 01:01:03.059413   59148 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 01:01:03.101818   59148 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:01:03.153263   59148 crio.go:496] all images are preloaded for cri-o runtime.
	I1101 01:01:03.153284   59148 cache_images.go:84] Images are preloaded, skipping loading
	I1101 01:01:03.153341   59148 ssh_runner.go:195] Run: crio config
	I1101 01:01:03.228205   59148 cni.go:84] Creating CNI manager for ""
	I1101 01:01:03.228225   59148 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:01:03.228241   59148 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 01:01:03.228265   59148 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.97 APIServerPort:8444 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-639310 NodeName:default-k8s-diff-port-639310 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 01:01:03.228386   59148 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.97
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-639310"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 01:01:03.228463   59148 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-639310 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-639310 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1101 01:01:03.228517   59148 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 01:01:03.240926   59148 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 01:01:03.241014   59148 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 01:01:03.253440   59148 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I1101 01:01:03.271480   59148 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 01:01:03.292784   59148 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I1101 01:01:03.315295   59148 ssh_runner.go:195] Run: grep 192.168.72.97	control-plane.minikube.internal$ /etc/hosts
	I1101 01:01:03.319922   59148 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:01:03.332820   59148 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310 for IP: 192.168.72.97
	I1101 01:01:03.332869   59148 certs.go:190] acquiring lock for shared ca certs: {Name:mk6185b3938c4b51e80f57b7c81926c81b632b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:01:03.333015   59148 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key
	I1101 01:01:03.333067   59148 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key
	I1101 01:01:03.333174   59148 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/client.key
	I1101 01:01:03.333255   59148 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/apiserver.key.6d6df538
	I1101 01:01:03.333307   59148 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/proxy-client.key
	I1101 01:01:03.333469   59148 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem (1338 bytes)
	W1101 01:01:03.333531   59148 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504_empty.pem, impossibly tiny 0 bytes
	I1101 01:01:03.333542   59148 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 01:01:03.333580   59148 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem (1078 bytes)
	I1101 01:01:03.333632   59148 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem (1123 bytes)
	I1101 01:01:03.333699   59148 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem (1675 bytes)
	I1101 01:01:03.333761   59148 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:01:03.334633   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 01:01:03.361740   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 01:01:03.387535   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 01:01:03.414252   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 01:01:03.438492   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 01:01:03.463501   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 01:01:03.489800   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 01:01:03.517317   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 01:01:03.543330   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem --> /usr/share/ca-certificates/14504.pem (1338 bytes)
	I1101 01:01:03.567744   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /usr/share/ca-certificates/145042.pem (1708 bytes)
	I1101 01:01:03.594230   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 01:01:03.620857   59148 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 01:01:03.638676   59148 ssh_runner.go:195] Run: openssl version
	I1101 01:01:03.644139   59148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14504.pem && ln -fs /usr/share/ca-certificates/14504.pem /etc/ssl/certs/14504.pem"
	I1101 01:01:03.654667   59148 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14504.pem
	I1101 01:01:03.659261   59148 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 01:01:03.659322   59148 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14504.pem
	I1101 01:01:03.664592   59148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14504.pem /etc/ssl/certs/51391683.0"
	I1101 01:01:03.675482   59148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145042.pem && ln -fs /usr/share/ca-certificates/145042.pem /etc/ssl/certs/145042.pem"
	I1101 01:01:03.687903   59148 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145042.pem
	I1101 01:01:03.692901   59148 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 01:01:03.692970   59148 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145042.pem
	I1101 01:01:03.698691   59148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145042.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 01:01:03.709971   59148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 01:01:03.720612   59148 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:01:03.725306   59148 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:01:03.725397   59148 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:01:03.731004   59148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 01:01:03.743558   59148 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 01:01:03.748428   59148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 01:01:03.754404   59148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 01:01:03.760210   59148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 01:01:03.765964   59148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 01:01:03.771813   59148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 01:01:03.777659   59148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
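Each `openssl x509 -noout -in <cert> -checkend 86400` run above asks whether the certificate will still be valid 24 hours (86400 seconds) from now. An equivalent check in Go is sketched below; the certificate path is illustrative and taken from one of the files checked above.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within the
// given window, i.e. the same condition openssl's -checkend flag tests.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}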
	I1101 01:01:03.783754   59148 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-639310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.3 ClusterName:default-k8s-diff-port-639310 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.97 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extra
Disks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 01:01:03.783846   59148 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 01:01:03.783903   59148 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:01:03.823390   59148 cri.go:89] found id: ""
	I1101 01:01:03.823473   59148 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 01:01:03.835317   59148 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1101 01:01:03.835339   59148 kubeadm.go:636] restartCluster start
	I1101 01:01:03.835393   59148 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 01:01:03.845532   59148 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:03.846629   59148 kubeconfig.go:92] found "default-k8s-diff-port-639310" server: "https://192.168.72.97:8444"
	I1101 01:01:03.849176   59148 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 01:01:03.859318   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:03.859387   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:03.871598   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:03.871620   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:03.871682   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:03.882903   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:04.383593   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:04.383684   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:04.398424   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:04.883982   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:04.884095   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:04.901344   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:01.341708   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:01.342186   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:01.342216   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:01.342138   60142 retry.go:31] will retry after 1.416825478s: waiting for machine to come up
	I1101 01:01:02.760851   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:02.761364   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:02.761391   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:02.761319   60142 retry.go:31] will retry after 1.783291063s: waiting for machine to come up
	I1101 01:01:04.546179   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:04.546731   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:04.546768   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:04.546684   60142 retry.go:31] will retry after 1.94150512s: waiting for machine to come up
	I1101 01:01:04.706156   58823 api_server.go:269] stopped: https://192.168.39.90:8443/healthz: Get "https://192.168.39.90:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 01:01:04.706226   58823 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I1101 01:01:05.474195   58823 api_server.go:279] https://192.168.39.90:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 01:01:05.474233   58823 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 01:01:05.975031   58823 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I1101 01:01:05.981753   58823 api_server.go:279] https://192.168.39.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1101 01:01:05.981796   58823 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1101 01:01:06.474331   58823 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I1101 01:01:06.483910   58823 api_server.go:279] https://192.168.39.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1101 01:01:06.483971   58823 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1101 01:01:06.974478   58823 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I1101 01:01:06.983225   58823 api_server.go:279] https://192.168.39.90:8443/healthz returned 200:
	ok
	I1101 01:01:06.992078   58823 api_server.go:141] control plane version: v1.16.0
	I1101 01:01:06.992104   58823 api_server.go:131] duration metric: took 7.286307099s to wait for apiserver health ...
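The healthz wait above tolerates the early 403 ("system:anonymous") and 500 (poststarthook failures) responses and only finishes once /healthz returns 200. A simplified sketch of that polling loop follows; the URL, interval, and timeout are illustrative, and TLS verification is skipped here only because this mirrors a self-signed test cluster.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the given URL until it returns HTTP 200 or the timeout
// elapses, treating any non-200 status as "not healthy yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", code)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.90:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}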
	I1101 01:01:06.992112   58823 cni.go:84] Creating CNI manager for ""
	I1101 01:01:06.992118   58823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:01:06.994180   58823 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:01:06.995961   58823 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:01:07.007478   58823 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1101 01:01:07.025029   58823 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:01:07.036645   58823 system_pods.go:59] 7 kube-system pods found
	I1101 01:01:07.036685   58823 system_pods.go:61] "coredns-5644d7b6d9-swhtm" [5c5eacff-9271-46c5-add0-a3931b67876b] Running
	I1101 01:01:07.036692   58823 system_pods.go:61] "etcd-old-k8s-version-330042" [0b703394-0d1c-419d-8e08-c2c299046293] Running
	I1101 01:01:07.036699   58823 system_pods.go:61] "kube-apiserver-old-k8s-version-330042" [0dcb0028-fa22-4107-afa1-fbdd14b615ab] Running
	I1101 01:01:07.036706   58823 system_pods.go:61] "kube-controller-manager-old-k8s-version-330042" [adc1372e-45e1-4365-a039-c06af715cb24] Running
	I1101 01:01:07.036712   58823 system_pods.go:61] "kube-proxy-h86m8" [6db2c8ff-26f9-4f22-9cbd-2405a81d9128] Running
	I1101 01:01:07.036718   58823 system_pods.go:61] "kube-scheduler-old-k8s-version-330042" [f3f78aa9-fcb1-4b87-b7fa-f86c44e801c0] Running
	I1101 01:01:07.036724   58823 system_pods.go:61] "storage-provisioner" [710e45b8-dab7-4bbc-9ce8-f607db4cb63e] Running
	I1101 01:01:07.036733   58823 system_pods.go:74] duration metric: took 11.681153ms to wait for pod list to return data ...
	I1101 01:01:07.036745   58823 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:01:07.043383   58823 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:01:07.043420   58823 node_conditions.go:123] node cpu capacity is 2
	I1101 01:01:07.043433   58823 node_conditions.go:105] duration metric: took 6.681589ms to run NodePressure ...
	I1101 01:01:07.043454   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:07.419893   58823 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1101 01:01:07.425342   58823 retry.go:31] will retry after 365.112122ms: kubelet not initialised
	I1101 01:01:03.491770   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:05.989935   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:05.383225   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:05.383308   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:05.399889   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:05.884036   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:05.884134   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:05.899867   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:06.383118   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:06.383241   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:06.399285   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:06.883379   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:06.883497   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:06.895160   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:07.383835   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:07.383951   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:07.401766   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:07.883254   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:07.883368   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:07.900024   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:08.383405   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:08.383494   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:08.401659   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:08.883099   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:08.883189   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:08.898348   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:09.383858   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:09.384003   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:09.396380   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:09.884003   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:09.884128   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:09.901031   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:06.489565   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:06.490176   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:06.490200   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:06.490117   60142 retry.go:31] will retry after 2.694877407s: waiting for machine to come up
	I1101 01:01:09.186086   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:09.186554   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:09.186584   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:09.186497   60142 retry.go:31] will retry after 2.651563817s: waiting for machine to come up
	I1101 01:01:07.799240   58823 retry.go:31] will retry after 519.025086ms: kubelet not initialised
	I1101 01:01:08.325024   58823 retry.go:31] will retry after 345.44325ms: kubelet not initialised
	I1101 01:01:08.674686   58823 retry.go:31] will retry after 665.113314ms: kubelet not initialised
	I1101 01:01:09.345867   58823 retry.go:31] will retry after 1.421023017s: kubelet not initialised
	I1101 01:01:10.773100   58823 retry.go:31] will retry after 1.15707988s: kubelet not initialised
	I1101 01:01:11.936215   58823 retry.go:31] will retry after 3.290674523s: kubelet not initialised
	I1101 01:01:08.490229   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:10.990967   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:12.991285   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:10.383739   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:10.383800   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:10.398972   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:10.882991   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:10.883089   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:10.897346   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:11.383976   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:11.384059   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:11.396332   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:11.883903   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:11.884020   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:11.897279   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:12.383675   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:12.383786   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:12.399623   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:12.883112   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:12.883191   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:12.895484   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:13.383069   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:13.383181   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:13.395417   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:13.860229   59148 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1101 01:01:13.860262   59148 kubeadm.go:1128] stopping kube-system containers ...
	I1101 01:01:13.860277   59148 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 01:01:13.860360   59148 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:01:13.901712   59148 cri.go:89] found id: ""
	I1101 01:01:13.901809   59148 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 01:01:13.918956   59148 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:01:13.931401   59148 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:01:13.931477   59148 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:01:13.943486   59148 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 01:01:13.943512   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:14.077324   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:11.839684   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:11.840140   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:11.840169   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:11.840105   60142 retry.go:31] will retry after 4.157820096s: waiting for machine to come up
	I1101 01:01:15.233157   58823 retry.go:31] will retry after 3.531336164s: kubelet not initialised
	I1101 01:01:15.490358   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:17.491953   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:16.001208   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.001765   58676 main.go:141] libmachine: (no-preload-008483) Found IP for machine: 192.168.50.140
	I1101 01:01:16.001790   58676 main.go:141] libmachine: (no-preload-008483) Reserving static IP address...
	I1101 01:01:16.001806   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has current primary IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.002298   58676 main.go:141] libmachine: (no-preload-008483) Reserved static IP address: 192.168.50.140
	I1101 01:01:16.002338   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "no-preload-008483", mac: "52:54:00:6c:aa:b5", ip: "192.168.50.140"} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.002357   58676 main.go:141] libmachine: (no-preload-008483) Waiting for SSH to be available...
	I1101 01:01:16.002381   58676 main.go:141] libmachine: (no-preload-008483) DBG | skip adding static IP to network mk-no-preload-008483 - found existing host DHCP lease matching {name: "no-preload-008483", mac: "52:54:00:6c:aa:b5", ip: "192.168.50.140"}
	I1101 01:01:16.002397   58676 main.go:141] libmachine: (no-preload-008483) DBG | Getting to WaitForSSH function...
	I1101 01:01:16.004912   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.005349   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.005387   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.005528   58676 main.go:141] libmachine: (no-preload-008483) DBG | Using SSH client type: external
	I1101 01:01:16.005550   58676 main.go:141] libmachine: (no-preload-008483) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa (-rw-------)
	I1101 01:01:16.005589   58676 main.go:141] libmachine: (no-preload-008483) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.140 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 01:01:16.005607   58676 main.go:141] libmachine: (no-preload-008483) DBG | About to run SSH command:
	I1101 01:01:16.005621   58676 main.go:141] libmachine: (no-preload-008483) DBG | exit 0
	I1101 01:01:16.100131   58676 main.go:141] libmachine: (no-preload-008483) DBG | SSH cmd err, output: <nil>: 
	I1101 01:01:16.100576   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetConfigRaw
	I1101 01:01:16.101304   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetIP
	I1101 01:01:16.104212   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.104482   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.104528   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.104710   58676 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/config.json ...
	I1101 01:01:16.104933   58676 machine.go:88] provisioning docker machine ...
	I1101 01:01:16.104951   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:01:16.105159   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetMachineName
	I1101 01:01:16.105351   58676 buildroot.go:166] provisioning hostname "no-preload-008483"
	I1101 01:01:16.105375   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetMachineName
	I1101 01:01:16.105551   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:16.107936   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.108287   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.108333   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.108422   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:16.108594   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.108734   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.108861   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:16.109041   58676 main.go:141] libmachine: Using SSH client type: native
	I1101 01:01:16.109531   58676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I1101 01:01:16.109557   58676 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-008483 && echo "no-preload-008483" | sudo tee /etc/hostname
	I1101 01:01:16.249893   58676 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-008483
	
	I1101 01:01:16.249924   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:16.253130   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.253531   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.253571   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.253879   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:16.254106   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.254304   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.254441   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:16.254608   58676 main.go:141] libmachine: Using SSH client type: native
	I1101 01:01:16.254965   58676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I1101 01:01:16.254987   58676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-008483' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-008483/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-008483' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 01:01:16.386797   58676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 01:01:16.386834   58676 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1101 01:01:16.386862   58676 buildroot.go:174] setting up certificates
	I1101 01:01:16.386870   58676 provision.go:83] configureAuth start
	I1101 01:01:16.386879   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetMachineName
	I1101 01:01:16.387149   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetIP
	I1101 01:01:16.390409   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.390812   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.390844   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.391055   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:16.393580   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.394122   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.394154   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.394352   58676 provision.go:138] copyHostCerts
	I1101 01:01:16.394425   58676 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem, removing ...
	I1101 01:01:16.394438   58676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 01:01:16.394506   58676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1101 01:01:16.394646   58676 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem, removing ...
	I1101 01:01:16.394658   58676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 01:01:16.394690   58676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1101 01:01:16.394774   58676 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem, removing ...
	I1101 01:01:16.394786   58676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 01:01:16.394811   58676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1101 01:01:16.394874   58676 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.no-preload-008483 san=[192.168.50.140 192.168.50.140 localhost 127.0.0.1 minikube no-preload-008483]
	I1101 01:01:16.593958   58676 provision.go:172] copyRemoteCerts
	I1101 01:01:16.594024   58676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 01:01:16.594046   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:16.597073   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.597449   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.597484   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.597723   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:16.597956   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.598108   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:16.598247   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:01:16.689574   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 01:01:16.714820   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1101 01:01:16.744383   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 01:01:16.769305   58676 provision.go:86] duration metric: configureAuth took 382.416455ms
	I1101 01:01:16.769338   58676 buildroot.go:189] setting minikube options for container-runtime
	I1101 01:01:16.769596   58676 config.go:182] Loaded profile config "no-preload-008483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:01:16.769692   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:16.773209   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.773565   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.773628   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.773828   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:16.774071   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.774353   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.774570   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:16.774772   58676 main.go:141] libmachine: Using SSH client type: native
	I1101 01:01:16.775132   58676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I1101 01:01:16.775150   58676 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 01:01:17.110397   58676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 01:01:17.110481   58676 machine.go:91] provisioned docker machine in 1.005532035s
	I1101 01:01:17.110500   58676 start.go:300] post-start starting for "no-preload-008483" (driver="kvm2")
	I1101 01:01:17.110513   58676 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 01:01:17.110559   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:01:17.110920   58676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 01:01:17.110948   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:17.114342   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.114794   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:17.114829   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.115028   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:17.115227   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:17.115440   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:17.115621   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:01:17.210514   58676 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 01:01:17.216393   58676 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 01:01:17.216423   58676 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1101 01:01:17.216522   58676 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1101 01:01:17.216640   58676 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> 145042.pem in /etc/ssl/certs
	I1101 01:01:17.216773   58676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 01:01:17.229604   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:01:17.255095   58676 start.go:303] post-start completed in 144.577436ms
	I1101 01:01:17.255120   58676 fix.go:56] fixHost completed within 21.173509578s
	I1101 01:01:17.255192   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:17.258433   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.258833   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:17.258858   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.259085   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:17.259305   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:17.259478   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:17.259628   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:17.259825   58676 main.go:141] libmachine: Using SSH client type: native
	I1101 01:01:17.260306   58676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I1101 01:01:17.260321   58676 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1101 01:01:17.389718   58676 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698800477.337229135
	
	I1101 01:01:17.389748   58676 fix.go:206] guest clock: 1698800477.337229135
	I1101 01:01:17.389770   58676 fix.go:219] Guest: 2023-11-01 01:01:17.337229135 +0000 UTC Remote: 2023-11-01 01:01:17.255124581 +0000 UTC m=+361.362725964 (delta=82.104554ms)
	I1101 01:01:17.389797   58676 fix.go:190] guest clock delta is within tolerance: 82.104554ms
	I1101 01:01:17.389804   58676 start.go:83] releasing machines lock for "no-preload-008483", held for 21.308227601s
	I1101 01:01:17.389828   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:01:17.390149   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetIP
	I1101 01:01:17.393289   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.393692   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:17.393723   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.393937   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:01:17.394589   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:01:17.394780   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:01:17.394877   58676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 01:01:17.394918   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:17.395060   58676 ssh_runner.go:195] Run: cat /version.json
	I1101 01:01:17.395115   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:17.398497   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:17.398497   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.398581   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:17.398642   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.398665   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.398700   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:17.398853   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:17.398861   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:17.398881   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.398995   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:01:17.399475   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:17.399644   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:17.399798   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:17.399976   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:01:17.524462   58676 ssh_runner.go:195] Run: systemctl --version
	I1101 01:01:17.530328   58676 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 01:01:17.678956   58676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 01:01:17.686754   58676 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 01:01:17.686834   58676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 01:01:17.705358   58676 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 01:01:17.705388   58676 start.go:472] detecting cgroup driver to use...
	I1101 01:01:17.705527   58676 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 01:01:17.722410   58676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 01:01:17.739380   58676 docker.go:204] disabling cri-docker service (if available) ...
	I1101 01:01:17.739443   58676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 01:01:17.755953   58676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 01:01:17.769672   58676 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 01:01:17.900801   58676 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 01:01:18.027283   58676 docker.go:220] disabling docker service ...
	I1101 01:01:18.027378   58676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 01:01:18.041230   58676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 01:01:18.052784   58676 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 01:01:18.165341   58676 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 01:01:18.276403   58676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 01:01:18.289618   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 01:01:18.308480   58676 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 01:01:18.308562   58676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:01:18.318597   58676 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 01:01:18.318673   58676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:01:18.328312   58676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:01:18.340054   58676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:01:18.351854   58676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 01:01:18.364129   58676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 01:01:18.372789   58676 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 01:01:18.372879   58676 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 01:01:18.385792   58676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 01:01:18.394803   58676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 01:01:18.503941   58676 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 01:01:18.687034   58676 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 01:01:18.687137   58676 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 01:01:18.691750   58676 start.go:540] Will wait 60s for crictl version
	I1101 01:01:18.691818   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:18.695752   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 01:01:18.735012   58676 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1101 01:01:18.735098   58676 ssh_runner.go:195] Run: crio --version
	I1101 01:01:18.782835   58676 ssh_runner.go:195] Run: crio --version
	I1101 01:01:18.829727   58676 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1101 01:01:15.054547   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:15.248625   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:15.325492   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:15.396782   59148 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:01:15.396854   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:15.420220   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:15.941271   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:16.441997   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:16.942240   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:17.441850   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:17.941784   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:17.965191   59148 api_server.go:72] duration metric: took 2.5684081s to wait for apiserver process to appear ...
	I1101 01:01:17.965220   59148 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:01:17.965238   59148 api_server.go:253] Checking apiserver healthz at https://192.168.72.97:8444/healthz ...
	I1101 01:01:18.831303   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetIP
	I1101 01:01:18.834574   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:18.834969   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:18.835003   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:18.835233   58676 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1101 01:01:18.839259   58676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:01:18.853665   58676 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 01:01:18.853725   58676 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:01:18.890995   58676 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1101 01:01:18.891024   58676 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.3 registry.k8s.io/kube-controller-manager:v1.28.3 registry.k8s.io/kube-scheduler:v1.28.3 registry.k8s.io/kube-proxy:v1.28.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1101 01:01:18.891130   58676 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1101 01:01:18.891143   58676 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.3
	I1101 01:01:18.891144   58676 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1101 01:01:18.891201   58676 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1101 01:01:18.891263   58676 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.3
	I1101 01:01:18.891397   58676 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.3
	I1101 01:01:18.891415   58676 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1101 01:01:18.891134   58676 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:01:18.892729   58676 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.3
	I1101 01:01:18.892742   58676 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:01:18.892747   58676 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1101 01:01:18.892760   58676 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1101 01:01:18.892760   58676 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1101 01:01:18.892729   58676 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.3
	I1101 01:01:18.892790   58676 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.3
	I1101 01:01:18.892835   58676 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1101 01:01:19.112836   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1101 01:01:19.131170   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.3
	I1101 01:01:19.147328   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.3
	I1101 01:01:19.148513   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I1101 01:01:19.155909   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.3
	I1101 01:01:19.163598   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.3
	I1101 01:01:19.166436   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I1101 01:01:19.290823   58676 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.3" needs transfer: "registry.k8s.io/kube-proxy:v1.28.3" does not exist at hash "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf" in container runtime
	I1101 01:01:19.290888   58676 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.3
	I1101 01:01:19.290943   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:19.331622   58676 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.3" does not exist at hash "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076" in container runtime
	I1101 01:01:19.331709   58676 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.3" does not exist at hash "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4" in container runtime
	I1101 01:01:19.331776   58676 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.3
	I1101 01:01:19.331717   58676 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.3
	I1101 01:01:19.331872   58676 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.3" does not exist at hash "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3" in container runtime
	I1101 01:01:19.331899   58676 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1101 01:01:19.331905   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:19.331645   58676 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1101 01:01:19.331979   58676 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1101 01:01:19.331986   58676 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1101 01:01:19.332011   58676 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1101 01:01:19.332023   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:19.331945   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:19.332053   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:19.332040   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.3
	I1101 01:01:19.331842   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:19.342099   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.3
	I1101 01:01:19.396521   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I1101 01:01:19.396603   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.3
	I1101 01:01:19.396612   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3
	I1101 01:01:19.396628   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.3
	I1101 01:01:19.396681   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1101 01:01:19.396700   58676 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.3
	I1101 01:01:19.396750   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3
	I1101 01:01:19.396842   58676 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1101 01:01:19.497732   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.3 (exists)
	I1101 01:01:19.497756   58676 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.3
	I1101 01:01:19.497784   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1101 01:01:19.497813   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3
	I1101 01:01:19.497871   58676 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0
	I1101 01:01:19.497924   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3
	I1101 01:01:19.497964   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.3 (exists)
	I1101 01:01:19.498009   58676 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1101 01:01:19.498015   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3
	I1101 01:01:19.498054   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1101 01:01:19.498111   58676 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I1101 01:01:19.498117   58676 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1101 01:01:19.764214   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:01:18.769797   58823 retry.go:31] will retry after 5.956460089s: kubelet not initialised
	I1101 01:01:19.987384   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:21.989585   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:22.277798   59148 api_server.go:279] https://192.168.72.97:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 01:01:22.277829   59148 api_server.go:103] status: https://192.168.72.97:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 01:01:22.277839   59148 api_server.go:253] Checking apiserver healthz at https://192.168.72.97:8444/healthz ...
	I1101 01:01:22.371756   59148 api_server.go:279] https://192.168.72.97:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 01:01:22.371796   59148 api_server.go:103] status: https://192.168.72.97:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 01:01:22.872332   59148 api_server.go:253] Checking apiserver healthz at https://192.168.72.97:8444/healthz ...
	I1101 01:01:22.884543   59148 api_server.go:279] https://192.168.72.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:01:22.884587   59148 api_server.go:103] status: https://192.168.72.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:01:23.372033   59148 api_server.go:253] Checking apiserver healthz at https://192.168.72.97:8444/healthz ...
	I1101 01:01:23.381608   59148 api_server.go:279] https://192.168.72.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:01:23.381657   59148 api_server.go:103] status: https://192.168.72.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:01:23.872319   59148 api_server.go:253] Checking apiserver healthz at https://192.168.72.97:8444/healthz ...
	I1101 01:01:23.879515   59148 api_server.go:279] https://192.168.72.97:8444/healthz returned 200:
	ok
	I1101 01:01:23.892376   59148 api_server.go:141] control plane version: v1.28.3
	I1101 01:01:23.892412   59148 api_server.go:131] duration metric: took 5.927178892s to wait for apiserver health ...
	I1101 01:01:23.892424   59148 cni.go:84] Creating CNI manager for ""
	I1101 01:01:23.892433   59148 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:01:23.894577   59148 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:01:23.896163   59148 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:01:23.928482   59148 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1101 01:01:23.952485   59148 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:01:23.968054   59148 system_pods.go:59] 8 kube-system pods found
	I1101 01:01:23.968095   59148 system_pods.go:61] "coredns-5dd5756b68-lmxx8" [c74c5ddc-56a8-422c-a140-1fdd14ef817d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 01:01:23.968115   59148 system_pods.go:61] "etcd-default-k8s-diff-port-639310" [1baf2571-f6c6-43bc-8051-e72f7eb4ed70] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 01:01:23.968126   59148 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-639310" [9cbc66c6-7c66-4b24-9400-a5add2edec14] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 01:01:23.968145   59148 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-639310" [99945be6-6fb8-4da6-8c6a-c25a2023d2d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 01:01:23.968158   59148 system_pods.go:61] "kube-proxy-f45wg" [abe74c94-5140-4c35-a141-d995652948e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 01:01:23.968167   59148 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-639310" [299c1962-1945-4525-90c7-384d515dc4e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 01:01:23.968188   59148 system_pods.go:61] "metrics-server-57f55c9bc5-6szl7" [1e00ef03-d5f4-4e8b-a247-8c31a5492f9e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:01:23.968201   59148 system_pods.go:61] "storage-provisioner" [fe2e7631-0564-44d2-afbd-578fb37f6a04] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 01:01:23.968215   59148 system_pods.go:74] duration metric: took 15.694719ms to wait for pod list to return data ...
	I1101 01:01:23.968224   59148 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:01:23.972141   59148 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:01:23.972177   59148 node_conditions.go:123] node cpu capacity is 2
	I1101 01:01:23.972191   59148 node_conditions.go:105] duration metric: took 3.96106ms to run NodePressure ...
	I1101 01:01:23.972214   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:24.253558   59148 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1101 01:01:24.258842   59148 kubeadm.go:787] kubelet initialised
	I1101 01:01:24.258869   59148 kubeadm.go:788] duration metric: took 5.283339ms waiting for restarted kubelet to initialise ...
	I1101 01:01:24.258878   59148 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:01:24.265507   59148 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-lmxx8" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:24.271381   59148 pod_ready.go:97] node "default-k8s-diff-port-639310" hosting pod "coredns-5dd5756b68-lmxx8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.271408   59148 pod_ready.go:81] duration metric: took 5.876802ms waiting for pod "coredns-5dd5756b68-lmxx8" in "kube-system" namespace to be "Ready" ...
	E1101 01:01:24.271418   59148 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-639310" hosting pod "coredns-5dd5756b68-lmxx8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.271426   59148 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:24.277446   59148 pod_ready.go:97] node "default-k8s-diff-port-639310" hosting pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.277476   59148 pod_ready.go:81] duration metric: took 6.04229ms waiting for pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	E1101 01:01:24.277487   59148 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-639310" hosting pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.277495   59148 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:24.283557   59148 pod_ready.go:97] node "default-k8s-diff-port-639310" hosting pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.283604   59148 pod_ready.go:81] duration metric: took 6.094277ms waiting for pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	E1101 01:01:24.283617   59148 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-639310" hosting pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.283630   59148 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:24.357249   59148 pod_ready.go:97] node "default-k8s-diff-port-639310" hosting pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.357288   59148 pod_ready.go:81] duration metric: took 73.64295ms waiting for pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	E1101 01:01:24.357302   59148 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-639310" hosting pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.357319   59148 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f45wg" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:21.457919   58676 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0: (1.960002941s)
	I1101 01:01:21.457955   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I1101 01:01:21.458111   58676 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.3: (1.960074529s)
	I1101 01:01:21.458140   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.3 (exists)
	I1101 01:01:21.458152   58676 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.3: (1.960014372s)
	I1101 01:01:21.458176   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.3 (exists)
	I1101 01:01:21.458226   58676 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1: (1.960094366s)
	I1101 01:01:21.458252   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I1101 01:01:21.458267   58676 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.694021872s)
	I1101 01:01:21.458306   58676 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1101 01:01:21.458344   58676 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:01:21.458392   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:21.458644   58676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3: (1.960815533s)
	I1101 01:01:21.458659   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3 from cache
	I1101 01:01:21.458686   58676 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1101 01:01:21.458718   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1101 01:01:21.462463   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:01:23.757842   58676 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.295346464s)
	I1101 01:01:23.757911   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1101 01:01:23.757849   58676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3: (2.299099605s)
	I1101 01:01:23.757965   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3 from cache
	I1101 01:01:23.758006   58676 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I1101 01:01:23.758025   58676 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1101 01:01:23.758040   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I1101 01:01:23.764726   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1101 01:01:24.732471   58823 retry.go:31] will retry after 9.584941607s: kubelet not initialised
	I1101 01:01:23.990727   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:26.489463   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:25.156181   59148 pod_ready.go:92] pod "kube-proxy-f45wg" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:25.156211   59148 pod_ready.go:81] duration metric: took 798.883976ms waiting for pod "kube-proxy-f45wg" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:25.156225   59148 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:27.476794   59148 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:29.974327   59148 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:29.974364   59148 pod_ready.go:81] duration metric: took 4.818128166s waiting for pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:29.974381   59148 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:28.990433   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:30.991378   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:32.004594   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:34.006695   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:31.399348   58676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.641283444s)
	I1101 01:01:31.399378   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I1101 01:01:31.399412   58676 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1101 01:01:31.399465   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1101 01:01:33.857323   58676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3: (2.45781579s)
	I1101 01:01:33.857356   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3 from cache
	I1101 01:01:33.857384   58676 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1101 01:01:33.857444   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1101 01:01:34.322788   58823 retry.go:31] will retry after 7.673111332s: kubelet not initialised
	I1101 01:01:33.488934   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:35.489417   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:37.989455   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:36.506432   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:39.004133   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:36.328716   58676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3: (2.471243195s)
	I1101 01:01:36.328755   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3 from cache
	I1101 01:01:36.328788   58676 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I1101 01:01:36.328839   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I1101 01:01:37.691820   58676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.362944664s)
	I1101 01:01:37.691851   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I1101 01:01:37.691877   58676 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1101 01:01:37.691978   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1101 01:01:38.442125   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1101 01:01:38.442181   58676 cache_images.go:123] Successfully loaded all cached images
	I1101 01:01:38.442188   58676 cache_images.go:92] LoadImages completed in 19.55115042s
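The cache_images lines above show the transfer decision: the ID reported by `sudo podman image inspect --format {{.Id}}` is compared against the expected hash, and only missing or mismatched images are loaded from the cached tarballs with `podman load -i`. A minimal Go sketch of that decision, run directly with os/exec rather than through the test harness's ssh_runner (image name, hash, and tarball path copied from the log for illustration):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// needsTransfer reports whether the named image is absent from the local
	// podman store or present under a different ID than expected.
	func needsTransfer(image, expectedID string) bool {
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image).Output()
		if err != nil {
			// podman exits non-zero when the image is not found: treat as "needs transfer".
			return true
		}
		return strings.TrimSpace(string(out)) != expectedID
	}

	func main() {
		need := needsTransfer("gcr.io/k8s-minikube/storage-provisioner:v5",
			"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562")
		fmt.Println("needs transfer:", need)
		if need {
			// Load the cached tarball into the image store, as the log does.
			_ = exec.Command("sudo", "podman", "load", "-i",
				"/var/lib/minikube/images/storage-provisioner_v5").Run()
		}
	}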
	I1101 01:01:38.442260   58676 ssh_runner.go:195] Run: crio config
	I1101 01:01:38.499778   58676 cni.go:84] Creating CNI manager for ""
	I1101 01:01:38.499799   58676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:01:38.499820   58676 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 01:01:38.499846   58676 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.140 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-008483 NodeName:no-preload-008483 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 01:01:38.500007   58676 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.140
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-008483"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.140
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.140"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 01:01:38.500076   58676 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-008483 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:no-preload-008483 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1101 01:01:38.500135   58676 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 01:01:38.510073   58676 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 01:01:38.510160   58676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 01:01:38.517853   58676 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1101 01:01:38.534085   58676 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 01:01:38.549312   58676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
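The kubeadm.go:181 block above is the fully rendered config that gets written to /var/tmp/minikube/kubeadm.yaml.new here. A rough sketch of how such a file can be rendered from a handful of per-profile values with text/template; the struct fields and the trimmed-down template are illustrative, not minikube's actual generator:

	package main

	import (
		"os"
		"text/template"
	)

	// params is an illustrative subset of the values that vary per profile in
	// the rendered config above (node IP, node name, version, subnets).
	type params struct {
		AdvertiseAddress  string
		NodeName          string
		KubernetesVersion string
		PodSubnet         string
		ServiceSubnet     string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: 8443
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		p := params{
			AdvertiseAddress:  "192.168.50.140",
			NodeName:          "no-preload-008483",
			KubernetesVersion: "v1.28.3",
			PodSubnet:         "10.244.0.0/16",
			ServiceSubnet:     "10.96.0.0/12",
		}
		// Render to stdout; the real run scps the result to /var/tmp/minikube/kubeadm.yaml.new.
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		_ = t.Execute(os.Stdout, p)
	}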
	I1101 01:01:38.566320   58676 ssh_runner.go:195] Run: grep 192.168.50.140	control-plane.minikube.internal$ /etc/hosts
	I1101 01:01:38.569923   58676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.140	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
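The bash one-liner above strips any existing control-plane.minikube.internal entry from /etc/hosts and appends the current node IP. The same idempotent rewrite, sketched in Go; the hostname and IP are taken from the log, but writing to /tmp/hosts.new is an assumption for the sketch, whereas the real run pipes through /tmp/h.$$ and installs it with sudo cp:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const host = "control-plane.minikube.internal"
		const ip = "192.168.50.140"

		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			// Drop any stale entry for the control-plane alias, keep everything else.
			if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+host) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
		// Installing the result over /etc/hosts needs root (sudo cp in the log).
		if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
	}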
	I1101 01:01:38.582147   58676 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483 for IP: 192.168.50.140
	I1101 01:01:38.582180   58676 certs.go:190] acquiring lock for shared ca certs: {Name:mk6185b3938c4b51e80f57b7c81926c81b632b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:01:38.582353   58676 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key
	I1101 01:01:38.582412   58676 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key
	I1101 01:01:38.582512   58676 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/client.key
	I1101 01:01:38.582596   58676 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/apiserver.key.306fa7af
	I1101 01:01:38.582664   58676 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/proxy-client.key
	I1101 01:01:38.582841   58676 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem (1338 bytes)
	W1101 01:01:38.582887   58676 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504_empty.pem, impossibly tiny 0 bytes
	I1101 01:01:38.582903   58676 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 01:01:38.582941   58676 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem (1078 bytes)
	I1101 01:01:38.582978   58676 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem (1123 bytes)
	I1101 01:01:38.583015   58676 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem (1675 bytes)
	I1101 01:01:38.583082   58676 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:01:38.583827   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 01:01:38.607306   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 01:01:38.631666   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 01:01:38.655201   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 01:01:38.678237   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 01:01:38.700410   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 01:01:38.726807   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 01:01:38.752672   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 01:01:38.776285   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 01:01:38.799902   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem --> /usr/share/ca-certificates/14504.pem (1338 bytes)
	I1101 01:01:38.823790   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /usr/share/ca-certificates/145042.pem (1708 bytes)
	I1101 01:01:38.847407   58676 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 01:01:38.863594   58676 ssh_runner.go:195] Run: openssl version
	I1101 01:01:38.869214   58676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14504.pem && ln -fs /usr/share/ca-certificates/14504.pem /etc/ssl/certs/14504.pem"
	I1101 01:01:38.878725   58676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14504.pem
	I1101 01:01:38.883007   58676 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 01:01:38.883069   58676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14504.pem
	I1101 01:01:38.888251   58676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14504.pem /etc/ssl/certs/51391683.0"
	I1101 01:01:38.899894   58676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145042.pem && ln -fs /usr/share/ca-certificates/145042.pem /etc/ssl/certs/145042.pem"
	I1101 01:01:38.909658   58676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145042.pem
	I1101 01:01:38.914011   58676 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 01:01:38.914088   58676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145042.pem
	I1101 01:01:38.919323   58676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145042.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 01:01:38.928836   58676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 01:01:38.937988   58676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:01:38.943540   58676 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:01:38.943607   58676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:01:38.949543   58676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 01:01:38.959098   58676 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 01:01:38.963149   58676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 01:01:38.968868   58676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 01:01:38.974315   58676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 01:01:38.979746   58676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 01:01:38.985852   58676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 01:01:38.991864   58676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
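Each `openssl x509 -noout -in <cert> -checkend 86400` run above verifies that a control-plane certificate remains valid for at least 24 hours before it is reused. An equivalent check with Go's crypto/x509 (the path is one of the certs from the log; the 24h window mirrors -checkend 86400):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// validFor reports whether the PEM certificate at path is still valid for
	// at least duration d, mirroring `openssl x509 -checkend`.
	func validFor(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := validFor("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
		fmt.Println(ok, err)
	}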
	I1101 01:01:38.998153   58676 kubeadm.go:404] StartCluster: {Name:no-preload-008483 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-008483 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.140 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 01:01:38.998271   58676 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 01:01:38.998340   58676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:01:39.045797   58676 cri.go:89] found id: ""
	I1101 01:01:39.045870   58676 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 01:01:39.056166   58676 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1101 01:01:39.056197   58676 kubeadm.go:636] restartCluster start
	I1101 01:01:39.056252   58676 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 01:01:39.065191   58676 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:39.066337   58676 kubeconfig.go:92] found "no-preload-008483" server: "https://192.168.50.140:8443"
	I1101 01:01:39.068843   58676 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 01:01:39.077558   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:39.077606   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:39.088105   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:39.088123   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:39.088168   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:39.100203   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:39.600957   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:39.601029   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:39.612652   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:40.101101   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:40.101191   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:40.113249   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:40.600487   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:40.600552   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:40.612183   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:42.002176   58823 kubeadm.go:787] kubelet initialised
	I1101 01:01:42.002198   58823 kubeadm.go:788] duration metric: took 34.582278796s waiting for restarted kubelet to initialise ...
	I1101 01:01:42.002211   58823 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:01:42.007691   58823 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-m8mn8" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.012657   58823 pod_ready.go:92] pod "coredns-5644d7b6d9-m8mn8" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:42.012677   58823 pod_ready.go:81] duration metric: took 4.961011ms waiting for pod "coredns-5644d7b6d9-m8mn8" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.012687   58823 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-swhtm" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.017099   58823 pod_ready.go:92] pod "coredns-5644d7b6d9-swhtm" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:42.017124   58823 pod_ready.go:81] duration metric: took 4.429709ms waiting for pod "coredns-5644d7b6d9-swhtm" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.017137   58823 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.021376   58823 pod_ready.go:92] pod "etcd-old-k8s-version-330042" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:42.021403   58823 pod_ready.go:81] duration metric: took 4.25772ms waiting for pod "etcd-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.021415   58823 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.026057   58823 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-330042" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:42.026080   58823 pod_ready.go:81] duration metric: took 4.65685ms waiting for pod "kube-apiserver-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.026096   58823 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.401057   58823 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-330042" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:42.401085   58823 pod_ready.go:81] duration metric: took 374.980275ms waiting for pod "kube-controller-manager-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.401099   58823 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-h86m8" in "kube-system" namespace to be "Ready" ...
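Each pod_ready.go line above is one poll of a pod's Ready condition until it reports True or the 4m0s budget runs out. A bare-bones version of that check with client-go, assuming a kubeconfig on disk at an illustrative path; the pod and namespace names are taken from the log, and the 2s polling interval is an assumption:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isReady returns the pod's Ready condition status (True, False, or Unknown).
	func isReady(pod *corev1.Pod) corev1.ConditionStatus {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status
			}
		}
		return corev1.ConditionUnknown
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-h86m8", metav1.GetOptions{})
			if err == nil && isReady(pod) == corev1.ConditionTrue {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}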
	I1101 01:01:40.487876   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:42.488609   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:41.504485   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:44.005180   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:41.100662   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:41.100773   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:41.113339   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:41.601121   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:41.601195   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:41.613986   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:42.101110   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:42.101188   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:42.113963   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:42.600356   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:42.600458   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:42.612154   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:43.100679   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:43.100767   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:43.113009   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:43.601328   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:43.601402   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:43.612862   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:44.101146   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:44.101261   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:44.113407   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:44.600812   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:44.600955   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:44.613161   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:45.100665   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:45.100769   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:45.112905   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:45.600416   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:45.600515   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:45.612930   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:42.801878   58823 pod_ready.go:92] pod "kube-proxy-h86m8" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:42.801899   58823 pod_ready.go:81] duration metric: took 400.793617ms waiting for pod "kube-proxy-h86m8" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.801907   58823 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:43.201586   58823 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-330042" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:43.201618   58823 pod_ready.go:81] duration metric: took 399.702904ms waiting for pod "kube-scheduler-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:43.201632   58823 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:45.508037   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:44.489092   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:46.493162   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:46.506251   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:49.004539   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:46.100957   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:46.101023   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:46.113645   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:46.600681   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:46.600781   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:46.612564   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:47.101090   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:47.101156   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:47.113500   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:47.601105   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:47.601244   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:47.613091   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:48.100608   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:48.100725   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:48.112995   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:48.600520   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:48.600603   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:48.612240   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:49.077973   58676 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1101 01:01:49.078017   58676 kubeadm.go:1128] stopping kube-system containers ...
	I1101 01:01:49.078031   58676 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 01:01:49.078097   58676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:01:49.117615   58676 cri.go:89] found id: ""
	I1101 01:01:49.117689   58676 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 01:01:49.133583   58676 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:01:49.142851   58676 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:01:49.142922   58676 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:01:49.151952   58676 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 01:01:49.151973   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:49.270827   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:50.046638   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:50.252510   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:50.327660   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:50.398419   58676 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:01:50.398511   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:50.415262   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:50.931672   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:47.508466   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:49.509032   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:51.510816   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:48.987561   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:50.989519   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:52.989978   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:51.004704   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:53.006138   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:51.431168   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:51.931127   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:52.431292   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:52.462617   58676 api_server.go:72] duration metric: took 2.064198698s to wait for apiserver process to appear ...
	I1101 01:01:52.462644   58676 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:01:52.462658   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:52.463297   58676 api_server.go:269] stopped: https://192.168.50.140:8443/healthz: Get "https://192.168.50.140:8443/healthz": dial tcp 192.168.50.140:8443: connect: connection refused
	I1101 01:01:52.463360   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:52.463831   58676 api_server.go:269] stopped: https://192.168.50.140:8443/healthz: Get "https://192.168.50.140:8443/healthz": dial tcp 192.168.50.140:8443: connect: connection refused
	I1101 01:01:52.964290   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:54.007720   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:56.012280   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:56.353340   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 01:01:56.353399   58676 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 01:01:56.353416   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:56.404133   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:01:56.404176   58676 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:01:56.464272   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:56.470496   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:01:56.470553   58676 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:01:56.964058   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:56.975831   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:01:56.975877   58676 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:01:57.464038   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:57.472652   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:01:57.472697   58676 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:01:57.964020   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:57.970866   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 200:
	ok
	I1101 01:01:57.979612   58676 api_server.go:141] control plane version: v1.28.3
	I1101 01:01:57.979641   58676 api_server.go:131] duration metric: took 5.516990946s to wait for apiserver health ...
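The api_server.go lines above poll https://192.168.50.140:8443/healthz roughly every 500ms, treating connection refused, 403 (anonymous access before RBAC bootstrap) and 500 (failed poststarthooks) as "not healthy yet" until a plain 200 "ok" comes back. A stripped-down sketch of that loop; TLS verification is skipped here for brevity, which is an assumption, not how the checker authenticates:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The sketch skips verification; a real checker would trust the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(1 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.50.140:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthy: %s\n", body)
					return
				}
				// 403 before RBAC bootstrap and 500 while poststarthooks run both mean "keep waiting".
				fmt.Printf("not ready yet: %d\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("gave up waiting for /healthz")
	}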
	I1101 01:01:57.979650   58676 cni.go:84] Creating CNI manager for ""
	I1101 01:01:57.979657   58676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:01:57.981694   58676 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:01:54.990377   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:57.489817   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:55.505767   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:57.505977   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:00.004800   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:57.983198   58676 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:01:58.006916   58676 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1101 01:01:58.035969   58676 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:01:58.047783   58676 system_pods.go:59] 8 kube-system pods found
	I1101 01:01:58.047833   58676 system_pods.go:61] "coredns-5dd5756b68-kcjf2" [e5cba8fe-f5c0-48cd-a21b-649caf4405cd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 01:01:58.047848   58676 system_pods.go:61] "etcd-no-preload-008483" [6e8ce64d-5c27-4528-9ecb-4bd1c2ab55c9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 01:01:58.047868   58676 system_pods.go:61] "kube-apiserver-no-preload-008483" [c320b03e-f364-4b38-8f09-5239d66f90e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 01:01:58.047881   58676 system_pods.go:61] "kube-controller-manager-no-preload-008483" [b89beee3-61e6-4efa-926f-43ae6a50e44b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 01:01:58.047893   58676 system_pods.go:61] "kube-proxy-xjfsj" [a7195683-b9ee-440c-82e6-efcd325a35e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 01:01:58.047907   58676 system_pods.go:61] "kube-scheduler-no-preload-008483" [d8c6a1f5-ceca-46af-9a40-22053f5387b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 01:01:58.047920   58676 system_pods.go:61] "metrics-server-57f55c9bc5-49wtw" [b87d5491-9981-48d5-9cf8-34dbd4b24435] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:01:58.047946   58676 system_pods.go:61] "storage-provisioner" [bf9d5910-ae5f-48f9-9358-54b2068c2e2c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 01:01:58.047959   58676 system_pods.go:74] duration metric: took 11.96541ms to wait for pod list to return data ...
	I1101 01:01:58.047971   58676 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:01:58.052170   58676 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:01:58.052205   58676 node_conditions.go:123] node cpu capacity is 2
	I1101 01:01:58.052218   58676 node_conditions.go:105] duration metric: took 4.239786ms to run NodePressure ...
	I1101 01:01:58.052237   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:58.340580   58676 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1101 01:01:58.351480   58676 kubeadm.go:787] kubelet initialised
	I1101 01:01:58.351510   58676 kubeadm.go:788] duration metric: took 10.903426ms waiting for restarted kubelet to initialise ...
	I1101 01:01:58.351520   58676 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:01:58.359099   58676 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-kcjf2" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:00.383123   58676 pod_ready.go:102] pod "coredns-5dd5756b68-kcjf2" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:58.509858   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:01.009429   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:59.988392   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:01.989042   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:02.505009   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:05.004485   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:02.880623   58676 pod_ready.go:102] pod "coredns-5dd5756b68-kcjf2" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:04.878534   58676 pod_ready.go:92] pod "coredns-5dd5756b68-kcjf2" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:04.878556   58676 pod_ready.go:81] duration metric: took 6.519426334s waiting for pod "coredns-5dd5756b68-kcjf2" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:04.878565   58676 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:03.508377   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:05.508570   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:03.990099   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:06.488196   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:07.005182   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:09.505205   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:06.907992   58676 pod_ready.go:102] pod "etcd-no-preload-008483" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:09.400005   58676 pod_ready.go:102] pod "etcd-no-preload-008483" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:09.900354   58676 pod_ready.go:92] pod "etcd-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:09.900379   58676 pod_ready.go:81] duration metric: took 5.021808339s waiting for pod "etcd-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.900394   58676 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.906496   58676 pod_ready.go:92] pod "kube-apiserver-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:09.906520   58676 pod_ready.go:81] duration metric: took 6.117499ms waiting for pod "kube-apiserver-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.906532   58676 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.911830   58676 pod_ready.go:92] pod "kube-controller-manager-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:09.911850   58676 pod_ready.go:81] duration metric: took 5.311751ms waiting for pod "kube-controller-manager-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.911859   58676 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xjfsj" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.916419   58676 pod_ready.go:92] pod "kube-proxy-xjfsj" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:09.916442   58676 pod_ready.go:81] duration metric: took 4.576855ms waiting for pod "kube-proxy-xjfsj" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.916454   58676 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.921501   58676 pod_ready.go:92] pod "kube-scheduler-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:09.921525   58676 pod_ready.go:81] duration metric: took 5.064522ms waiting for pod "kube-scheduler-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.921536   58676 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace to be "Ready" ...
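	The pod_ready.go lines above poll each system-critical pod until its Ready condition flips to True, or a 4m0s deadline expires (as eventually happens for the metrics-server pods further down). A rough client-go sketch of that kind of wait follows; the function and variable names are illustrative, not minikube's, and the kubeconfig path is an assumption.

```go
// podready: a hedged sketch of waiting for a pod's Ready condition,
// in the spirit of the pod_ready.go waits logged above. Illustrative only.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the pod until its Ready condition is True or the
// context deadline expires.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pod %s/%s not Ready: %w", ns, name, ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	// Assumption: kubeconfig at the conventional ~/.kube/config path;
	// the pod name is taken from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitPodReady(ctx, cs, "kube-system", "metrics-server-57f55c9bc5-49wtw"); err != nil {
		fmt.Println(err)
	}
}
```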
	I1101 01:02:07.514883   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:10.008399   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:08.490011   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:10.988504   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:12.989076   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:11.507014   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:13.509053   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:12.205003   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:14.705621   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:12.509113   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:15.009543   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:15.487844   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:17.488178   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:16.003423   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:18.003597   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:20.004472   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:17.205434   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:19.214743   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:17.508997   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:20.008838   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:22.009023   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:19.488902   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:21.988210   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:22.004908   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:24.503394   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:21.704199   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:23.704855   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:25.705319   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:24.508980   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:27.008249   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:23.988985   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:26.489079   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:26.504752   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:28.505579   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:27.709065   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:30.205608   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:29.507299   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:31.509017   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:28.988567   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:31.488567   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:30.507770   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:33.005199   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:32.707783   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:35.206392   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:34.007977   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:36.008250   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:33.988120   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:36.489908   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:35.503482   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:37.504132   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:39.504348   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:37.704511   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:39.705791   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:38.008778   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:40.509040   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:38.987615   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:40.988646   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:42.005253   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:44.008492   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:42.206082   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:44.704875   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:43.009095   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:45.508557   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:43.489792   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:45.987971   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:47.989322   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:46.504096   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:49.004605   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:47.205736   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:49.704264   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:47.510014   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:50.009950   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:50.489334   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:52.987877   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:51.005543   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:53.504243   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:52.205173   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:54.704843   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:52.509247   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:55.009346   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:55.488330   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:57.987845   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:55.504494   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:58.003674   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:00.004598   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:57.205092   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:59.705637   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:57.522422   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:00.007902   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:02.009964   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:59.987956   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:01.989730   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:02.005953   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:04.007095   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:02.205761   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:04.704065   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:04.508531   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:06.512303   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:04.487667   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:06.487854   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:06.503630   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:08.504993   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:06.704568   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:08.705012   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:09.008519   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:11.509450   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:08.488843   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:10.987614   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:12.989824   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:10.505932   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:13.005799   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:11.203683   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:13.204241   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:15.705287   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:14.008244   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:16.009433   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:15.488278   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:17.988683   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:15.503739   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:17.506253   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:20.004613   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:18.204056   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:20.205312   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:18.009706   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:20.508744   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:20.490044   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:22.989002   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:22.504922   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:25.004156   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:22.704711   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:25.205072   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:23.008359   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:25.509196   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:25.487961   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:27.488324   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:27.008179   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:29.504182   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:27.205671   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:29.208402   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:27.509247   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:30.008627   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:29.988286   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:32.487504   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:31.504973   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:34.004168   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:31.704298   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:33.704452   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:32.507959   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:35.008631   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:37.009271   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:34.488458   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:36.488759   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:36.503146   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:38.504444   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:36.204750   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:38.705346   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:39.507406   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:41.509812   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:38.988439   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:41.491496   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:40.505301   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:42.506003   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:45.004872   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:41.204015   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:43.206055   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:45.705597   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:44.008441   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:46.009900   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:43.987813   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:45.988508   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:47.989201   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:47.505799   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:49.506424   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:48.204686   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:50.704155   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:48.511303   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:51.008360   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:50.488123   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:52.488356   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:52.004387   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:54.505016   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:52.705891   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:54.706732   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:53.008988   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:55.507791   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:54.988620   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:56.990186   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:57.005565   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:59.505220   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:57.205342   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:59.215160   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:57.508013   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:59.509883   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:01.510115   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:59.490512   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:01.988008   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:02.004869   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:04.503903   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:01.704963   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:04.204466   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:04.007146   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:06.007815   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:04.488270   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:06.987544   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:06.505818   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:09.006093   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:06.205560   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:08.703961   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:10.705037   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:08.008817   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:10.508585   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:08.988223   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:10.989742   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:12.990669   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:11.503914   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:13.504018   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:13.206290   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:15.704820   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:13.008696   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:15.010312   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:15.487596   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:17.489381   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:15.505665   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:18.004825   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:20.004966   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:18.205022   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:20.703582   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:17.508842   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:20.008489   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:22.008572   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:19.988378   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:22.490000   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:22.005055   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:24.504050   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:22.704263   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:24.704479   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:24.507893   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:27.009371   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:24.988500   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:27.490306   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:26.504850   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:29.003907   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:27.204442   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:29.204906   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:29.508234   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:31.508285   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:29.988549   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:32.490618   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:31.504800   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:33.506025   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:31.704974   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:34.204565   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:33.512784   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:36.009709   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:34.988579   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:37.491535   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:36.011080   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:38.503881   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:36.204772   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:38.205329   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:40.707128   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:38.509404   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:41.009915   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:39.988897   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:42.487751   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:40.504606   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:42.504912   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:44.505101   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:43.205005   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:45.207096   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:43.507714   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:45.508866   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:44.988852   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:47.488268   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:47.004069   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:49.005029   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:47.704762   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:49.705584   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:48.009495   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:50.508392   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:49.488880   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:51.988841   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:51.504680   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:54.010010   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:52.204554   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:54.705101   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:53.008194   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:55.008373   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:57.009351   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:54.489702   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:56.389066   58730 pod_ready.go:81] duration metric: took 4m0.000951404s waiting for pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace to be "Ready" ...
	E1101 01:04:56.389116   58730 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1101 01:04:56.389139   58730 pod_ready.go:38] duration metric: took 4m11.103640013s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:04:56.389173   58730 kubeadm.go:640] restartCluster took 4m34.207263569s
	W1101 01:04:56.389254   58730 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1101 01:04:56.389292   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
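	At this point process 58730 has given up on restarting the existing cluster (the 4m0s extra wait expired) and falls back to tearing it down with kubeadm reset before re-running kubeadm init, as the subsequent lines show. The sketch below is a hedged, local approximation of that reset-then-reinit fallback; minikube actually runs these commands on the guest over SSH via its ssh_runner, and the --ignore-preflight-errors list here is abbreviated from the full one in the log.

```go
// resetfallback: a hedged local sketch of the "reset then re-init" fallback
// the log records. NOT how minikube invokes it (it uses SSH, not local exec).
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command, prints its combined output, and returns its error.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	// Tear down the half-restarted control plane.
	if err := run("kubeadm", "reset", "--cri-socket", "/var/run/crio/crio.sock", "--force"); err != nil {
		fmt.Println("reset failed:", err)
		return
	}
	// Re-initialise from the generated config, tolerating leftovers on disk
	// (the real invocation in the log passes a longer --ignore-preflight-errors list).
	if err := run("kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem"); err != nil {
		fmt.Println("init failed:", err)
	}
}
```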
	I1101 01:04:56.504421   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:58.505542   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:56.705911   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:58.706099   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:00.706478   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:59.509462   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:02.009472   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:00.509320   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:03.007708   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:03.203884   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:05.204356   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:04.009580   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:06.508160   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:05.505057   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:07.506811   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:10.004080   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:07.205229   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:09.206089   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:08.509319   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:11.009099   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:12.261608   58730 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (15.872291337s)
	I1101 01:05:12.261694   58730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:05:12.275334   58730 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:05:12.284969   58730 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:05:12.295834   58730 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:05:12.295881   58730 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1101 01:05:12.526039   58730 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 01:05:12.005261   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:14.005683   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:11.706864   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:14.204758   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:13.508597   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:16.008784   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:16.506282   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:19.004037   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:16.205361   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:18.704890   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:18.008878   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:20.009861   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:23.201664   58730 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1101 01:05:23.201785   58730 kubeadm.go:322] [preflight] Running pre-flight checks
	I1101 01:05:23.201920   58730 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 01:05:23.202057   58730 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 01:05:23.202178   58730 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 01:05:23.202255   58730 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 01:05:23.204179   58730 out.go:204]   - Generating certificates and keys ...
	I1101 01:05:23.204304   58730 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1101 01:05:23.204384   58730 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1101 01:05:23.204480   58730 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 01:05:23.204557   58730 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1101 01:05:23.204639   58730 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1101 01:05:23.204715   58730 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1101 01:05:23.204792   58730 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1101 01:05:23.204884   58730 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1101 01:05:23.205007   58730 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 01:05:23.205133   58730 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 01:05:23.205195   58730 kubeadm.go:322] [certs] Using the existing "sa" key
	I1101 01:05:23.205273   58730 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 01:05:23.205332   58730 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 01:05:23.205391   58730 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 01:05:23.205461   58730 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 01:05:23.205550   58730 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 01:05:23.205656   58730 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 01:05:23.205734   58730 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 01:05:23.207792   58730 out.go:204]   - Booting up control plane ...
	I1101 01:05:23.207914   58730 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 01:05:23.208028   58730 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 01:05:23.208124   58730 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 01:05:23.208244   58730 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 01:05:23.208322   58730 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 01:05:23.208356   58730 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1101 01:05:23.208496   58730 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 01:05:23.208569   58730 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003034 seconds
	I1101 01:05:23.208662   58730 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 01:05:23.208762   58730 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 01:05:23.208840   58730 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 01:05:23.209055   58730 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-754132 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 01:05:23.209148   58730 kubeadm.go:322] [bootstrap-token] Using token: j0j8ab.rja1mh5j9krst0k4
	I1101 01:05:23.210755   58730 out.go:204]   - Configuring RBAC rules ...
	I1101 01:05:23.210895   58730 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 01:05:23.211001   58730 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 01:05:23.211205   58730 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 01:05:23.211369   58730 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 01:05:23.211509   58730 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 01:05:23.211617   58730 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 01:05:23.211776   58730 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 01:05:23.211851   58730 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1101 01:05:23.211894   58730 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1101 01:05:23.211901   58730 kubeadm.go:322] 
	I1101 01:05:23.211985   58730 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1101 01:05:23.211992   58730 kubeadm.go:322] 
	I1101 01:05:23.212076   58730 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1101 01:05:23.212085   58730 kubeadm.go:322] 
	I1101 01:05:23.212128   58730 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1101 01:05:23.212205   58730 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 01:05:23.212256   58730 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 01:05:23.212263   58730 kubeadm.go:322] 
	I1101 01:05:23.212305   58730 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1101 01:05:23.212314   58730 kubeadm.go:322] 
	I1101 01:05:23.212352   58730 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 01:05:23.212359   58730 kubeadm.go:322] 
	I1101 01:05:23.212400   58730 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1101 01:05:23.212461   58730 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 01:05:23.212568   58730 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 01:05:23.212584   58730 kubeadm.go:322] 
	I1101 01:05:23.212699   58730 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 01:05:23.212787   58730 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1101 01:05:23.212797   58730 kubeadm.go:322] 
	I1101 01:05:23.212862   58730 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token j0j8ab.rja1mh5j9krst0k4 \
	I1101 01:05:23.212943   58730 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 \
	I1101 01:05:23.212962   58730 kubeadm.go:322] 	--control-plane 
	I1101 01:05:23.212968   58730 kubeadm.go:322] 
	I1101 01:05:23.213083   58730 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1101 01:05:23.213093   58730 kubeadm.go:322] 
	I1101 01:05:23.213202   58730 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token j0j8ab.rja1mh5j9krst0k4 \
	I1101 01:05:23.213346   58730 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 
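The --discovery-token-ca-cert-hash printed in the join commands above is kubeadm's public-key pin: "sha256:" followed by the hex SHA-256 of the cluster CA certificate's SubjectPublicKeyInfo. A small sketch that recomputes it from a CA certificate PEM; the input path is an assumption, though on this node the certificate directory is /var/lib/minikube/certs per the [certs] lines above:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	// Path is illustrative; the cluster CA on this node would live at
    	// /var/lib/minikube/certs/ca.crt.
    	data, err := os.ReadFile("ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm pins the SHA-256 of the certificate's SubjectPublicKeyInfo.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }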
	I1101 01:05:23.213366   58730 cni.go:84] Creating CNI manager for ""
	I1101 01:05:23.213375   58730 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:05:23.215058   58730 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:05:23.216515   58730 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:05:23.251532   58730 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
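The 457-byte conflist pushed to /etc/cni/net.d/1-k8s.conflist is not shown in the log, but a bridge-plugin CNI configuration of the sort being written here looks roughly like the following sketch (field values are illustrative assumptions, not necessarily byte-for-byte what minikube writes):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }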
	I1101 01:05:21.007674   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:23.505067   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:21.204745   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:23.206316   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:25.211036   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:22.507158   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:24.508157   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:26.508990   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:23.291112   58730 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 01:05:23.291192   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:23.291224   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9 minikube.k8s.io/name=embed-certs-754132 minikube.k8s.io/updated_at=2023_11_01T01_05_23_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:23.452410   58730 ops.go:34] apiserver oom_adj: -16
	I1101 01:05:23.635798   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:23.754993   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:24.350830   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:24.850468   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:25.350887   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:25.850719   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:26.350946   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:26.850869   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:27.350851   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:27.850856   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:25.507083   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:27.511273   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:29.974545   59148 pod_ready.go:81] duration metric: took 4m0.000148043s waiting for pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace to be "Ready" ...
	E1101 01:05:29.974585   59148 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1101 01:05:29.974607   59148 pod_ready.go:38] duration metric: took 4m5.715718658s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:05:29.974652   59148 kubeadm.go:640] restartCluster took 4m26.139306333s
	W1101 01:05:29.974746   59148 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1101 01:05:29.974779   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1101 01:05:27.704338   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:30.205751   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:29.008649   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:31.009235   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:28.350920   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:28.850670   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:29.350172   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:29.850241   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:30.351225   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:30.851276   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:31.350289   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:31.850999   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:32.350874   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:32.850500   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:32.708147   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:35.205568   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:33.351023   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:33.851109   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:34.351257   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:34.850212   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:35.350277   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:35.850281   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:36.350770   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:36.456508   58730 kubeadm.go:1081] duration metric: took 13.165385995s to wait for elevateKubeSystemPrivileges.
	I1101 01:05:36.456550   58730 kubeadm.go:406] StartCluster complete in 5m14.31984828s
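The burst of `kubectl get sa default` runs at roughly 500ms intervals above is the wait behind elevateKubeSystemPrivileges: minikube keeps retrying until the "default" service account exists before its cluster-admin binding for kube-system:default can take effect. A simplified polling loop of the same shape, assuming a plain exec-plus-sleep retry rather than minikube's implementation:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		// Succeeds only once the controller-manager has created the
    		// "default" ServiceAccount in the default namespace.
    		if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
    			fmt.Println("default service account is present")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for the default service account")
    }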
	I1101 01:05:36.456575   58730 settings.go:142] acquiring lock: {Name:mk7f269e64dfd8d176737f993e01f6e6badafbc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:05:36.456674   58730 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 01:05:36.458488   58730 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/kubeconfig: {Name:mk08da65b6c71084e1cfafb19800038e8c8303e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:05:36.458789   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 01:05:36.458936   58730 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1101 01:05:36.459029   58730 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-754132"
	I1101 01:05:36.459061   58730 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-754132"
	W1101 01:05:36.459076   58730 addons.go:240] addon storage-provisioner should already be in state true
	I1101 01:05:36.459086   58730 addons.go:69] Setting metrics-server=true in profile "embed-certs-754132"
	I1101 01:05:36.459124   58730 addons.go:231] Setting addon metrics-server=true in "embed-certs-754132"
	I1101 01:05:36.459134   58730 host.go:66] Checking if "embed-certs-754132" exists ...
	I1101 01:05:36.459060   58730 config.go:182] Loaded profile config "embed-certs-754132": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:05:36.459062   58730 addons.go:69] Setting default-storageclass=true in profile "embed-certs-754132"
	I1101 01:05:36.459219   58730 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-754132"
	W1101 01:05:36.459138   58730 addons.go:240] addon metrics-server should already be in state true
	I1101 01:05:36.459347   58730 host.go:66] Checking if "embed-certs-754132" exists ...
	I1101 01:05:36.459588   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.459633   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.459638   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.459674   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.459689   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.459713   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.477136   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40825
	I1101 01:05:36.477207   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37895
	I1101 01:05:36.477706   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46261
	I1101 01:05:36.477874   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.477889   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.478086   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.478388   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.478405   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.478540   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.478561   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.478601   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.478622   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.478794   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.478990   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.479037   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.479219   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetState
	I1101 01:05:36.479379   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.479412   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.479587   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.479623   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.483272   58730 addons.go:231] Setting addon default-storageclass=true in "embed-certs-754132"
	W1101 01:05:36.483295   58730 addons.go:240] addon default-storageclass should already be in state true
	I1101 01:05:36.483318   58730 host.go:66] Checking if "embed-certs-754132" exists ...
	I1101 01:05:36.483665   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.483696   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.498137   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46727
	I1101 01:05:36.498148   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37157
	I1101 01:05:36.498530   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.499000   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.499024   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.499329   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.499499   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetState
	I1101 01:05:36.501223   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:05:36.503752   58730 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:05:36.505580   58730 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:05:36.505600   58730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 01:05:36.505617   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:05:36.505756   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37761
	I1101 01:05:36.506307   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.506765   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.506783   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.507257   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.507303   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.507766   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.507786   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.507852   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.507894   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.508136   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.508296   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetState
	I1101 01:05:36.509982   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:05:36.512303   58730 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1101 01:05:36.512065   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:05:36.513712   58730 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 01:05:36.513728   58730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 01:05:36.513749   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:05:36.512082   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:05:36.513819   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:05:36.513839   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:05:36.516632   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:05:36.516867   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:05:36.517052   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:05:36.517489   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:05:36.518036   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:05:36.518058   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:05:36.518360   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:05:36.519431   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:05:36.519602   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:05:36.519742   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:05:36.526881   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35481
	I1101 01:05:36.527462   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.527889   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.527902   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.528341   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.528511   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetState
	I1101 01:05:36.530250   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:05:36.530539   58730 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 01:05:36.530557   58730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 01:05:36.530575   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:05:36.533671   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:05:36.534068   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:05:36.534093   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:05:36.534368   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:05:36.534596   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:05:36.534741   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:05:36.534821   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:05:36.559098   58730 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-754132" context rescaled to 1 replicas
	I1101 01:05:36.559135   58730 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.83 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 01:05:36.561061   58730 out.go:177] * Verifying Kubernetes components...
	I1101 01:05:33.009726   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:35.507972   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:36.562382   58730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:05:36.684098   58730 node_ready.go:35] waiting up to 6m0s for node "embed-certs-754132" to be "Ready" ...
	I1101 01:05:36.684219   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 01:05:36.689836   58730 node_ready.go:49] node "embed-certs-754132" has status "Ready":"True"
	I1101 01:05:36.689863   58730 node_ready.go:38] duration metric: took 5.731179ms waiting for node "embed-certs-754132" to be "Ready" ...
	I1101 01:05:36.689875   58730 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:05:36.707509   58730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:05:36.743671   58730 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 01:05:36.743702   58730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1101 01:05:36.764886   58730 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:36.773895   58730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 01:05:36.810064   58730 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 01:05:36.810095   58730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 01:05:36.888833   58730 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:05:36.888854   58730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 01:05:36.892836   58730 pod_ready.go:92] pod "etcd-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:05:36.892864   58730 pod_ready.go:81] duration metric: took 127.938482ms waiting for pod "etcd-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:36.892879   58730 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:36.968554   58730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:05:36.978210   58730 pod_ready.go:92] pod "kube-apiserver-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:05:36.978239   58730 pod_ready.go:81] duration metric: took 85.351942ms waiting for pod "kube-apiserver-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:36.978254   58730 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:37.154956   58730 pod_ready.go:92] pod "kube-controller-manager-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:05:37.154983   58730 pod_ready.go:81] duration metric: took 176.720364ms waiting for pod "kube-controller-manager-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:37.154997   58730 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cwbfz" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:38.405267   58730 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.720993157s)
	I1101 01:05:38.405302   58730 start.go:926] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
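The sed pipeline completed just above rewrites the coredns ConfigMap in place: it inserts a hosts block immediately before the existing forward directive (so host.minikube.internal resolves to the host-side gateway address) and adds a log directive above errors. From the sed expression itself, the inserted portion of the Corefile reads:

            hosts {
               192.168.61.1 host.minikube.internal
               fallthrough
            }

with fallthrough ensuring every other name still goes on to the forward plugin and the upstream resolvers in /etc/resolv.conf.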
	I1101 01:05:38.840834   58730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.133283925s)
	I1101 01:05:38.840891   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.840906   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.840918   58730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.066970508s)
	I1101 01:05:38.841048   58730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.872463156s)
	I1101 01:05:38.841085   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.841098   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.841320   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.841370   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.841373   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.841328   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.841400   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.841412   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.841426   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.841390   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.841442   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.841454   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.841457   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.841354   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.844717   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.844730   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.844723   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.844744   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.844753   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.844757   58730 addons.go:467] Verifying addon metrics-server=true in "embed-certs-754132"
	I1101 01:05:38.844763   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.844774   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.844773   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.844789   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.844799   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.844808   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.845059   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.845077   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.845092   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.890752   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.890785   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.891075   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.891095   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.891108   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.892878   58730 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I1101 01:05:37.706877   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:39.707206   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:38.894405   58730 addons.go:502] enable addons completed in 2.435477984s: enabled=[metrics-server storage-provisioner default-storageclass]
	I1101 01:05:39.279100   58730 pod_ready.go:102] pod "kube-proxy-cwbfz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:40.775597   58730 pod_ready.go:92] pod "kube-proxy-cwbfz" in "kube-system" namespace has status "Ready":"True"
	I1101 01:05:40.775622   58730 pod_ready.go:81] duration metric: took 3.620618998s waiting for pod "kube-proxy-cwbfz" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:40.775644   58730 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:40.782773   58730 pod_ready.go:92] pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:05:40.782796   58730 pod_ready.go:81] duration metric: took 7.145643ms waiting for pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:40.782806   58730 pod_ready.go:38] duration metric: took 4.092919772s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:05:40.782821   58730 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:05:40.782868   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:05:40.811977   58730 api_server.go:72] duration metric: took 4.252812827s to wait for apiserver process to appear ...
	I1101 01:05:40.812000   58730 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:05:40.812017   58730 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8443/healthz ...
	I1101 01:05:40.817524   58730 api_server.go:279] https://192.168.61.83:8443/healthz returned 200:
	ok
	I1101 01:05:40.819599   58730 api_server.go:141] control plane version: v1.28.3
	I1101 01:05:40.819625   58730 api_server.go:131] duration metric: took 7.617418ms to wait for apiserver health ...
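The healthz wait above is a plain HTTPS GET against the apiserver's /healthz endpoint; anything other than a 200 with body "ok" keeps the loop going. A minimal sketch of such a probe, with the caveat that certificate verification is skipped here only because the sketch does not load the cluster CA (minikube itself checks the endpoint with proper credentials):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	// Endpoint taken from the log line above.
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
    	}}
    	resp, err := client.Get("https://192.168.61.83:8443/healthz")
    	if err != nil {
    		fmt.Println("healthz not reachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }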
	I1101 01:05:40.819636   58730 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:05:40.826677   58730 system_pods.go:59] 8 kube-system pods found
	I1101 01:05:40.826714   58730 system_pods.go:61] "coredns-5dd5756b68-6kqbc" [e03e6370-35d1-4438-8b18-d62b0a253ea6] Running
	I1101 01:05:40.826722   58730 system_pods.go:61] "etcd-embed-certs-754132" [2cd8789c-8ba8-47ea-82f2-e461cbc9d3b3] Running
	I1101 01:05:40.826729   58730 system_pods.go:61] "kube-apiserver-embed-certs-754132" [81bd13a3-37ea-4bf6-9eb9-e66318137a21] Running
	I1101 01:05:40.826735   58730 system_pods.go:61] "kube-controller-manager-embed-certs-754132" [6aa18435-1990-479b-b975-7ac1d794d967] Running
	I1101 01:05:40.826742   58730 system_pods.go:61] "kube-proxy-cwbfz" [b7f5ba1e-bd63-456b-94cc-0e2c121b7792] Running
	I1101 01:05:40.826748   58730 system_pods.go:61] "kube-scheduler-embed-certs-754132" [64203f31-7c41-42d0-9d6b-bc63e1b423cc] Running
	I1101 01:05:40.826758   58730 system_pods.go:61] "metrics-server-57f55c9bc5-499xs" [617aecda-f132-4358-9da9-bbc4fc625da0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:05:40.826773   58730 system_pods.go:61] "storage-provisioner" [7feb8931-83d0-4968-a295-a4202e8fc8c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 01:05:40.826786   58730 system_pods.go:74] duration metric: took 7.142747ms to wait for pod list to return data ...
	I1101 01:05:40.826799   58730 default_sa.go:34] waiting for default service account to be created ...
	I1101 01:05:40.831268   58730 default_sa.go:45] found service account: "default"
	I1101 01:05:40.831295   58730 default_sa.go:55] duration metric: took 4.485602ms for default service account to be created ...
	I1101 01:05:40.831309   58730 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 01:05:40.891306   58730 system_pods.go:86] 8 kube-system pods found
	I1101 01:05:40.891335   58730 system_pods.go:89] "coredns-5dd5756b68-6kqbc" [e03e6370-35d1-4438-8b18-d62b0a253ea6] Running
	I1101 01:05:40.891341   58730 system_pods.go:89] "etcd-embed-certs-754132" [2cd8789c-8ba8-47ea-82f2-e461cbc9d3b3] Running
	I1101 01:05:40.891346   58730 system_pods.go:89] "kube-apiserver-embed-certs-754132" [81bd13a3-37ea-4bf6-9eb9-e66318137a21] Running
	I1101 01:05:40.891350   58730 system_pods.go:89] "kube-controller-manager-embed-certs-754132" [6aa18435-1990-479b-b975-7ac1d794d967] Running
	I1101 01:05:40.891354   58730 system_pods.go:89] "kube-proxy-cwbfz" [b7f5ba1e-bd63-456b-94cc-0e2c121b7792] Running
	I1101 01:05:40.891358   58730 system_pods.go:89] "kube-scheduler-embed-certs-754132" [64203f31-7c41-42d0-9d6b-bc63e1b423cc] Running
	I1101 01:05:40.891366   58730 system_pods.go:89] "metrics-server-57f55c9bc5-499xs" [617aecda-f132-4358-9da9-bbc4fc625da0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:05:40.891373   58730 system_pods.go:89] "storage-provisioner" [7feb8931-83d0-4968-a295-a4202e8fc8c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 01:05:40.891381   58730 system_pods.go:126] duration metric: took 60.065984ms to wait for k8s-apps to be running ...
	I1101 01:05:40.891391   58730 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 01:05:40.891436   58730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:05:40.906845   58730 system_svc.go:56] duration metric: took 15.443235ms WaitForService to wait for kubelet.
	I1101 01:05:40.906875   58730 kubeadm.go:581] duration metric: took 4.347718478s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1101 01:05:40.906895   58730 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:05:41.089628   58730 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:05:41.089654   58730 node_conditions.go:123] node cpu capacity is 2
	I1101 01:05:41.089664   58730 node_conditions.go:105] duration metric: took 182.764311ms to run NodePressure ...
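The NodePressure verification above reads the same node status object that reports the ephemeral storage and CPU capacity, and the check fails if any resource-pressure condition is True. A compact sketch of that kind of condition scan using the core/v1 types (illustrative assumption about the shape of the check, not minikube's exact code):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // hasPressure reports whether any resource-pressure condition on the
    // node is currently True.
    func hasPressure(node *corev1.Node) bool {
    	for _, cond := range node.Status.Conditions {
    		switch cond.Type {
    		case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
    			if cond.Status == corev1.ConditionTrue {
    				return true
    			}
    		}
    	}
    	return false
    }

    func main() {
    	n := &corev1.Node{Status: corev1.NodeStatus{
    		Conditions: []corev1.NodeCondition{{Type: corev1.NodeMemoryPressure, Status: corev1.ConditionFalse}},
    	}}
    	fmt.Println("node under pressure:", hasPressure(n))
    }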
	I1101 01:05:41.089674   58730 start.go:228] waiting for startup goroutines ...
	I1101 01:05:41.089680   58730 start.go:233] waiting for cluster config update ...
	I1101 01:05:41.089693   58730 start.go:242] writing updated cluster config ...
	I1101 01:05:41.089950   58730 ssh_runner.go:195] Run: rm -f paused
	I1101 01:05:41.140594   58730 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1101 01:05:41.143142   58730 out.go:177] * Done! kubectl is now configured to use "embed-certs-754132" cluster and "default" namespace by default
	I1101 01:05:37.516552   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:40.009373   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:43.882201   59148 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.907397495s)
	I1101 01:05:43.882275   59148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:05:43.897793   59148 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:05:43.908350   59148 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:05:43.919013   59148 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:05:43.919066   59148 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1101 01:05:43.992534   59148 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1101 01:05:43.992653   59148 kubeadm.go:322] [preflight] Running pre-flight checks
	I1101 01:05:44.162750   59148 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 01:05:44.162906   59148 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 01:05:44.163052   59148 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 01:05:44.398016   59148 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 01:05:44.399998   59148 out.go:204]   - Generating certificates and keys ...
	I1101 01:05:44.400102   59148 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1101 01:05:44.400189   59148 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1101 01:05:44.400334   59148 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 01:05:44.400431   59148 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1101 01:05:44.400526   59148 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1101 01:05:44.400602   59148 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1101 01:05:44.400736   59148 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1101 01:05:44.400821   59148 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1101 01:05:44.401336   59148 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 01:05:44.401936   59148 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 01:05:44.402420   59148 kubeadm.go:322] [certs] Using the existing "sa" key
	I1101 01:05:44.402515   59148 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 01:05:44.470807   59148 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 01:05:44.642677   59148 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 01:05:44.768991   59148 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 01:05:45.052817   59148 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 01:05:45.053698   59148 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 01:05:45.056339   59148 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 01:05:42.204108   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:44.205679   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:42.508073   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:43.201762   58823 pod_ready.go:81] duration metric: took 4m0.000100455s waiting for pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace to be "Ready" ...
	E1101 01:05:43.201795   58823 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1101 01:05:43.201816   58823 pod_ready.go:38] duration metric: took 4m1.199592624s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:05:43.201848   58823 kubeadm.go:640] restartCluster took 4m57.555406731s
	W1101 01:05:43.201899   58823 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1101 01:05:43.201920   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1101 01:05:45.058304   59148 out.go:204]   - Booting up control plane ...
	I1101 01:05:45.058434   59148 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 01:05:45.058565   59148 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 01:05:45.060937   59148 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 01:05:45.078776   59148 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 01:05:45.079692   59148 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 01:05:45.079771   59148 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1101 01:05:45.204880   59148 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 01:05:46.208575   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:48.705698   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:50.708163   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:48.240337   58823 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.038387523s)
	I1101 01:05:48.240417   58823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:05:48.257585   58823 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:05:48.266949   58823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:05:48.277302   58823 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:05:48.277346   58823 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1101 01:05:48.514394   58823 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 01:05:54.708746   59148 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503354 seconds
	I1101 01:05:54.708894   59148 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 01:05:54.726194   59148 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 01:05:55.266392   59148 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 01:05:55.266670   59148 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-639310 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 01:05:55.783906   59148 kubeadm.go:322] [bootstrap-token] Using token: ilpx6n.m6vs8mqxrjuf2w8f
	I1101 01:05:53.205312   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:55.206016   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:55.786231   59148 out.go:204]   - Configuring RBAC rules ...
	I1101 01:05:55.786370   59148 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 01:05:55.793682   59148 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 01:05:55.812319   59148 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 01:05:55.819324   59148 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 01:05:55.825785   59148 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 01:05:55.831793   59148 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 01:05:55.858443   59148 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 01:05:56.195472   59148 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1101 01:05:56.248405   59148 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1101 01:05:56.249655   59148 kubeadm.go:322] 
	I1101 01:05:56.249745   59148 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1101 01:05:56.249759   59148 kubeadm.go:322] 
	I1101 01:05:56.249852   59148 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1101 01:05:56.249869   59148 kubeadm.go:322] 
	I1101 01:05:56.249931   59148 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1101 01:05:56.249992   59148 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 01:05:56.250076   59148 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 01:05:56.250088   59148 kubeadm.go:322] 
	I1101 01:05:56.250163   59148 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1101 01:05:56.250172   59148 kubeadm.go:322] 
	I1101 01:05:56.250261   59148 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 01:05:56.250281   59148 kubeadm.go:322] 
	I1101 01:05:56.250344   59148 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1101 01:05:56.250436   59148 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 01:05:56.250560   59148 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 01:05:56.250574   59148 kubeadm.go:322] 
	I1101 01:05:56.250663   59148 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 01:05:56.250757   59148 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1101 01:05:56.250769   59148 kubeadm.go:322] 
	I1101 01:05:56.250881   59148 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token ilpx6n.m6vs8mqxrjuf2w8f \
	I1101 01:05:56.251011   59148 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 \
	I1101 01:05:56.251041   59148 kubeadm.go:322] 	--control-plane 
	I1101 01:05:56.251053   59148 kubeadm.go:322] 
	I1101 01:05:56.251150   59148 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1101 01:05:56.251162   59148 kubeadm.go:322] 
	I1101 01:05:56.251259   59148 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token ilpx6n.m6vs8mqxrjuf2w8f \
	I1101 01:05:56.251383   59148 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 
	I1101 01:05:56.251922   59148 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 01:05:56.251982   59148 cni.go:84] Creating CNI manager for ""
	I1101 01:05:56.252008   59148 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:05:56.254247   59148 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:05:56.256068   59148 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:05:56.281994   59148 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
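	The bridge CNI step above writes a 457-byte conflist to /etc/cni/net.d/1-k8s.conflist, but the file's contents are not captured in the log. For orientation, a bridge-plugin conflist typically has roughly the following shape; this is an illustrative sketch only, and the subnet and plugin options are assumptions rather than the actual file minikube wrote.

	    sudo mkdir -p /etc/cni/net.d
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.4.0",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isGateway": true,
	          "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF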
	I1101 01:05:56.324660   59148 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 01:05:56.324796   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:56.324863   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9 minikube.k8s.io/name=default-k8s-diff-port-639310 minikube.k8s.io/updated_at=2023_11_01T01_05_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:56.739064   59148 ops.go:34] apiserver oom_adj: -16
	I1101 01:05:56.739245   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:56.834852   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:57.432044   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:57.931920   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:58.432414   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:58.932871   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:59.432755   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:59.932515   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:57.704234   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:59.705516   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:01.231970   58823 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1101 01:06:01.232062   58823 kubeadm.go:322] [preflight] Running pre-flight checks
	I1101 01:06:01.232156   58823 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 01:06:01.232289   58823 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 01:06:01.232419   58823 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 01:06:01.232595   58823 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 01:06:01.232714   58823 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 01:06:01.232790   58823 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1101 01:06:01.232890   58823 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 01:06:01.235429   58823 out.go:204]   - Generating certificates and keys ...
	I1101 01:06:01.235533   58823 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1101 01:06:01.235606   58823 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1101 01:06:01.235675   58823 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 01:06:01.235782   58823 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1101 01:06:01.235889   58823 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1101 01:06:01.235973   58823 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1101 01:06:01.236065   58823 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1101 01:06:01.236153   58823 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1101 01:06:01.236263   58823 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 01:06:01.236383   58823 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 01:06:01.236447   58823 kubeadm.go:322] [certs] Using the existing "sa" key
	I1101 01:06:01.236528   58823 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 01:06:01.236607   58823 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 01:06:01.236728   58823 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 01:06:01.236811   58823 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 01:06:01.236877   58823 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 01:06:01.236955   58823 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 01:06:01.238699   58823 out.go:204]   - Booting up control plane ...
	I1101 01:06:01.238808   58823 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 01:06:01.238904   58823 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 01:06:01.238990   58823 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 01:06:01.239092   58823 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 01:06:01.239289   58823 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 01:06:01.239387   58823 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.004023 seconds
	I1101 01:06:01.239528   58823 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 01:06:01.239741   58823 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 01:06:01.239796   58823 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 01:06:01.239971   58823 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-330042 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1101 01:06:01.240056   58823 kubeadm.go:322] [bootstrap-token] Using token: lseik6.3ozwuciianl7vrri
	I1101 01:06:01.241690   58823 out.go:204]   - Configuring RBAC rules ...
	I1101 01:06:01.241825   58823 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 01:06:01.242015   58823 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 01:06:01.242170   58823 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 01:06:01.242265   58823 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 01:06:01.242380   58823 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 01:06:01.242448   58823 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1101 01:06:01.242517   58823 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1101 01:06:01.242549   58823 kubeadm.go:322] 
	I1101 01:06:01.242631   58823 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1101 01:06:01.242646   58823 kubeadm.go:322] 
	I1101 01:06:01.242753   58823 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1101 01:06:01.242764   58823 kubeadm.go:322] 
	I1101 01:06:01.242801   58823 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1101 01:06:01.242883   58823 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 01:06:01.242956   58823 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 01:06:01.242965   58823 kubeadm.go:322] 
	I1101 01:06:01.243041   58823 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1101 01:06:01.243152   58823 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 01:06:01.243249   58823 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 01:06:01.243261   58823 kubeadm.go:322] 
	I1101 01:06:01.243357   58823 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1101 01:06:01.243421   58823 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1101 01:06:01.243425   58823 kubeadm.go:322] 
	I1101 01:06:01.243490   58823 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token lseik6.3ozwuciianl7vrri \
	I1101 01:06:01.243597   58823 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 \
	I1101 01:06:01.243619   58823 kubeadm.go:322]     --control-plane 	  
	I1101 01:06:01.243623   58823 kubeadm.go:322] 
	I1101 01:06:01.243697   58823 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1101 01:06:01.243702   58823 kubeadm.go:322] 
	I1101 01:06:01.243773   58823 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token lseik6.3ozwuciianl7vrri \
	I1101 01:06:01.243923   58823 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 
	I1101 01:06:01.243967   58823 cni.go:84] Creating CNI manager for ""
	I1101 01:06:01.243979   58823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:06:01.246766   58823 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:06:01.248244   58823 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:06:01.274713   58823 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1101 01:06:01.299087   58823 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 01:06:01.299184   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:01.299241   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9 minikube.k8s.io/name=old-k8s-version-330042 minikube.k8s.io/updated_at=2023_11_01T01_06_01_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:01.350480   58823 ops.go:34] apiserver oom_adj: -16
	I1101 01:06:01.668212   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:01.795923   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:02.398955   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:00.432038   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:00.932486   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:01.431924   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:01.932050   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:02.432828   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:02.932070   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:03.432833   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:03.931826   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:04.432522   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:04.932660   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:01.705717   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:04.205431   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:02.899285   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:03.398507   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:03.898445   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:04.399301   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:04.898647   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:05.399211   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:05.899099   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:06.398426   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:06.898703   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:07.399266   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:05.431880   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:05.932001   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:06.432804   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:06.932744   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:07.432405   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:07.932531   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:08.432007   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:08.560694   59148 kubeadm.go:1081] duration metric: took 12.235943971s to wait for elevateKubeSystemPrivileges.
	I1101 01:06:08.560733   59148 kubeadm.go:406] StartCluster complete in 5m4.77698433s
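	The long run of "kubectl get sa default" calls above is the elevateKubeSystemPrivileges wait: after creating the minikube-rbac cluster-admin binding, minikube repeatedly runs that command until it succeeds, and only then reports StartCluster complete. A minimal shell sketch of the same polling loop, using the binary and kubeconfig paths shown in the log (the retry interval and retry cap are assumptions):

	    # Retry until the "default" ServiceAccount is visible in the new cluster.
	    for _ in $(seq 1 240); do
	      if sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; then
	        break
	      fi
	      sleep 0.5
	    done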
	I1101 01:06:08.560756   59148 settings.go:142] acquiring lock: {Name:mk7f269e64dfd8d176737f993e01f6e6badafbc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:06:08.560862   59148 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 01:06:08.563346   59148 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/kubeconfig: {Name:mk08da65b6c71084e1cfafb19800038e8c8303e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:06:08.563655   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 01:06:08.563793   59148 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1101 01:06:08.563857   59148 config.go:182] Loaded profile config "default-k8s-diff-port-639310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:06:08.563874   59148 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-639310"
	I1101 01:06:08.563892   59148 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-639310"
	I1101 01:06:08.563905   59148 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-639310"
	I1101 01:06:08.563917   59148 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-639310"
	I1101 01:06:08.563950   59148 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-639310"
	I1101 01:06:08.563899   59148 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-639310"
	W1101 01:06:08.563962   59148 addons.go:240] addon metrics-server should already be in state true
	W1101 01:06:08.563990   59148 addons.go:240] addon storage-provisioner should already be in state true
	I1101 01:06:08.564025   59148 host.go:66] Checking if "default-k8s-diff-port-639310" exists ...
	I1101 01:06:08.564064   59148 host.go:66] Checking if "default-k8s-diff-port-639310" exists ...
	I1101 01:06:08.564369   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.564404   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.564421   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.564453   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.564455   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.564488   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.581714   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37509
	I1101 01:06:08.582180   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.583081   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35137
	I1101 01:06:08.583312   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.583332   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.583553   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41541
	I1101 01:06:08.583702   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.583714   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.583891   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.584174   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.584200   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.584272   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.584302   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.584638   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.584687   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.584737   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.584993   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.585152   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetState
	I1101 01:06:08.585215   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.585256   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.588703   59148 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-639310"
	W1101 01:06:08.588728   59148 addons.go:240] addon default-storageclass should already be in state true
	I1101 01:06:08.588754   59148 host.go:66] Checking if "default-k8s-diff-port-639310" exists ...
	I1101 01:06:08.589158   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.589193   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.600826   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40787
	I1101 01:06:08.601314   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.601952   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.601976   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.602335   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.602560   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetState
	I1101 01:06:08.603276   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35887
	I1101 01:06:08.603415   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36765
	I1101 01:06:08.603803   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.604098   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.604276   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.604290   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.604490   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.604506   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.604573   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:06:08.604778   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.606338   59148 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:06:08.605001   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.605380   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.607632   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.607705   59148 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:06:08.607717   59148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 01:06:08.607731   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:06:08.607995   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetState
	I1101 01:06:08.610424   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:06:08.612025   59148 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1101 01:06:08.613346   59148 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 01:06:08.613365   59148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 01:06:08.613386   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:06:08.611304   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:06:08.611864   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:06:08.613461   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:06:08.613508   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:06:08.613650   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:06:08.613769   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:06:08.613869   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:06:08.618717   59148 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-639310" context rescaled to 1 replicas
	I1101 01:06:08.618755   59148 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.97 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 01:06:08.620291   59148 out.go:177] * Verifying Kubernetes components...
	I1101 01:06:08.618896   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:06:08.620048   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:06:08.621662   59148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:06:08.621747   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:06:08.621777   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:06:08.622129   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:06:08.622359   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:06:08.622526   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:06:08.629241   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42169
	I1101 01:06:08.629773   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.630164   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.630181   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.630428   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.630558   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetState
	I1101 01:06:08.631892   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:06:08.632176   59148 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 01:06:08.632197   59148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 01:06:08.632216   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:06:08.634872   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:06:08.635211   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:06:08.635241   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:06:08.635375   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:06:08.635576   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:06:08.635713   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:06:08.635839   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:06:08.984005   59148 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 01:06:08.984032   59148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1101 01:06:08.991838   59148 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-639310" to be "Ready" ...
	I1101 01:06:08.991921   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 01:06:09.011096   59148 node_ready.go:49] node "default-k8s-diff-port-639310" has status "Ready":"True"
	I1101 01:06:09.011124   59148 node_ready.go:38] duration metric: took 19.250763ms waiting for node "default-k8s-diff-port-639310" to be "Ready" ...
	I1101 01:06:09.011136   59148 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:06:09.043526   59148 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:09.071032   59148 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 01:06:09.071065   59148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 01:06:09.089683   59148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 01:06:09.090332   59148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:06:09.139676   59148 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:06:09.139702   59148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 01:06:09.219436   59148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:06:06.705499   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:09.207584   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:09.922465   58676 pod_ready.go:81] duration metric: took 4m0.000913678s waiting for pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace to be "Ready" ...
	E1101 01:06:09.922511   58676 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1101 01:06:09.922529   58676 pod_ready.go:38] duration metric: took 4m11.570999497s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:06:09.922566   58676 kubeadm.go:640] restartCluster took 4m30.866358786s
	W1101 01:06:09.922644   58676 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1101 01:06:09.922688   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1101 01:06:11.075881   59148 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.083916099s)
	I1101 01:06:11.075915   59148 start.go:926] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
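	The sed pipeline completed above edits the CoreDNS Corefile held in the coredns ConfigMap: it adds a "log" directive ahead of the existing "errors" line and inserts a hosts stanza immediately before "forward . /etc/resolv.conf", then replaces the ConfigMap. Based on that sed expression, the inserted stanza looks roughly like this (surrounding Corefile directives omitted):

	    hosts {
	       192.168.72.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf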
	I1101 01:06:11.075946   59148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.986221728s)
	I1101 01:06:11.075997   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.076012   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.076348   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.076367   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.076377   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.076386   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.076620   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.076639   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.119713   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.119741   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.120145   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.120170   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.120145   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | Closing plugin on server side
	I1101 01:06:11.172242   59148 pod_ready.go:102] pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:11.954880   59148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.864508967s)
	I1101 01:06:11.954945   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.954960   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.955014   59148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.735537793s)
	I1101 01:06:11.955074   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.955088   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.955379   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | Closing plugin on server side
	I1101 01:06:11.955411   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.955418   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.955429   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.955438   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.957487   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | Closing plugin on server side
	I1101 01:06:11.957532   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.957549   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.957537   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.957612   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.957566   59148 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-639310"
	I1101 01:06:11.957643   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.957672   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.958036   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.958063   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.960489   59148 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I1101 01:06:07.899402   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:08.398731   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:08.898547   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:09.399015   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:09.898437   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:10.399024   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:10.899108   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:11.398482   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:11.898943   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:12.399022   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:11.962129   59148 addons.go:502] enable addons completed in 3.39833009s: enabled=[default-storageclass metrics-server storage-provisioner]
	I1101 01:06:13.684297   59148 pod_ready.go:102] pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:12.899212   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:13.398415   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:13.898444   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:14.398630   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:14.898427   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:15.399212   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:15.898869   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:16.399289   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:16.588122   58823 kubeadm.go:1081] duration metric: took 15.28901357s to wait for elevateKubeSystemPrivileges.
	I1101 01:06:16.588166   58823 kubeadm.go:406] StartCluster complete in 5m31.002121514s
	I1101 01:06:16.588190   58823 settings.go:142] acquiring lock: {Name:mk7f269e64dfd8d176737f993e01f6e6badafbc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:06:16.588290   58823 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 01:06:16.590925   58823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/kubeconfig: {Name:mk08da65b6c71084e1cfafb19800038e8c8303e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:06:16.591235   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 01:06:16.591339   58823 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1101 01:06:16.591416   58823 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-330042"
	I1101 01:06:16.591436   58823 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-330042"
	W1101 01:06:16.591444   58823 addons.go:240] addon storage-provisioner should already be in state true
	I1101 01:06:16.591477   58823 config.go:182] Loaded profile config "old-k8s-version-330042": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1101 01:06:16.591517   58823 host.go:66] Checking if "old-k8s-version-330042" exists ...
	I1101 01:06:16.591525   58823 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-330042"
	I1101 01:06:16.591541   58823 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-330042"
	I1101 01:06:16.591923   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.591924   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.591962   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.591980   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.592045   58823 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-330042"
	I1101 01:06:16.592064   58823 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-330042"
	W1101 01:06:16.592071   58823 addons.go:240] addon metrics-server should already be in state true
	I1101 01:06:16.592104   58823 host.go:66] Checking if "old-k8s-version-330042" exists ...
	I1101 01:06:16.592424   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.592468   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.610602   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35459
	I1101 01:06:16.611188   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.611722   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.611752   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.611893   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35425
	I1101 01:06:16.612233   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.612315   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.612802   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.612841   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.613196   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.613215   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.613550   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.613571   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39319
	I1101 01:06:16.613949   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.614126   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.614159   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.614425   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.614438   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.614811   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.614997   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetState
	I1101 01:06:16.617747   58823 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-330042"
	W1101 01:06:16.617763   58823 addons.go:240] addon default-storageclass should already be in state true
	I1101 01:06:16.617783   58823 host.go:66] Checking if "old-k8s-version-330042" exists ...
	I1101 01:06:16.618021   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.618044   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.633877   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37903
	I1101 01:06:16.634227   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34049
	I1101 01:06:16.634436   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.635052   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.635225   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.635251   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.635588   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.635603   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.635656   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.636032   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.636092   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetState
	I1101 01:06:16.636310   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetState
	I1101 01:06:16.637897   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:06:16.640069   58823 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:06:16.638479   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:06:16.640887   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35501
	I1101 01:06:16.641511   58823 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:06:16.641523   58823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 01:06:16.641540   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:06:16.642477   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.643099   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.643115   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.643826   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.644397   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.644432   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.644515   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:06:16.644534   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:06:16.644549   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:06:16.644743   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:06:16.644908   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:06:16.645006   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:06:16.645102   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:06:16.648901   58823 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1101 01:06:16.650287   58823 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 01:06:16.650299   58823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 01:06:16.650316   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:06:16.654323   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:06:16.654694   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:06:16.654720   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:06:16.655020   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:06:16.655268   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:06:16.655450   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:06:16.655600   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:06:16.663888   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32991
	I1101 01:06:16.664490   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.665023   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.665049   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.665533   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.665720   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetState
	I1101 01:06:16.667516   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:06:16.667817   58823 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 01:06:16.667837   58823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 01:06:16.667856   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:06:16.670789   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:06:16.671306   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:06:16.671332   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:06:16.671519   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:06:16.671688   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:06:16.671811   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:06:16.671974   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:06:16.738145   58823 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-330042" context rescaled to 1 replicas
	I1101 01:06:16.738193   58823 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 01:06:16.740269   58823 out.go:177] * Verifying Kubernetes components...
	I1101 01:06:16.741889   58823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:06:16.827316   58823 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 01:06:16.827347   58823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1101 01:06:16.846888   58823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:06:16.868760   58823 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-330042" to be "Ready" ...
	I1101 01:06:16.868848   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 01:06:16.885920   58823 node_ready.go:49] node "old-k8s-version-330042" has status "Ready":"True"
	I1101 01:06:16.885962   58823 node_ready.go:38] duration metric: took 17.171382ms waiting for node "old-k8s-version-330042" to be "Ready" ...
	I1101 01:06:16.885975   58823 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:06:16.907070   58823 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-v2xlz" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:16.929166   58823 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 01:06:16.929190   58823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 01:06:16.946209   58823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 01:06:17.010599   58823 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:06:17.010628   58823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 01:06:17.132054   58823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:06:17.868039   58823 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1101 01:06:17.868039   58823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.021104248s)
	I1101 01:06:17.868120   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:17.868126   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:17.868140   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:17.868142   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:17.870315   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Closing plugin on server side
	I1101 01:06:17.870338   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Closing plugin on server side
	I1101 01:06:17.870352   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:17.870364   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:17.870378   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:17.870400   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:17.870429   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:17.870439   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:17.870448   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:17.870470   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:17.870865   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Closing plugin on server side
	I1101 01:06:17.870866   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Closing plugin on server side
	I1101 01:06:17.870876   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:17.870890   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:17.870899   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:17.870915   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:17.920542   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:17.920570   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:17.920923   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Closing plugin on server side
	I1101 01:06:17.920969   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:17.920980   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:18.189030   58823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.056928538s)
	I1101 01:06:18.189096   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:18.189109   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:18.189446   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Closing plugin on server side
	I1101 01:06:18.189464   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:18.189476   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:18.189486   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:18.189506   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:18.189735   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:18.189752   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:18.189760   58823 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-330042"
	I1101 01:06:18.192103   58823 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1101 01:06:16.156689   59148 pod_ready.go:102] pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:18.158318   59148 pod_ready.go:102] pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:18.194035   58823 addons.go:502] enable addons completed in 1.602699312s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1101 01:06:18.978162   58823 pod_ready.go:102] pod "coredns-5644d7b6d9-v2xlz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:21.456448   58823 pod_ready.go:102] pod "coredns-5644d7b6d9-v2xlz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:20.657398   59148 pod_ready.go:102] pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:22.156680   59148 pod_ready.go:97] pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.72.97 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-11-01 01:06:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-11-01 01:06:11 +0000 UTC,FinishedAt:2023-11-01 01:06:21 +0000 UTC,ContainerID:cri-o://1ecc4b16207e32548d5d59a4bb7a01519d7e5eaf75b05171abd6c8c635656811,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://1ecc4b16207e32548d5d59a4bb7a01519d7e5eaf75b05171abd6c8c635656811 Started:0xc002af16c0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1101 01:06:22.156709   59148 pod_ready.go:81] duration metric: took 13.113156669s waiting for pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace to be "Ready" ...
	E1101 01:06:22.156718   59148 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.72.97 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-11-01 01:06:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-11-01 01:06:11 +0000 UTC,FinishedAt:2023-11-01 01:06:21 +0000 UTC,ContainerID:cri-o://1ecc4b16207e32548d5d59a4bb7a01519d7e5eaf75b05171abd6c8c635656811,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://1ecc4b16207e32548d5d59a4bb7a01519d7e5eaf75b05171abd6c8c635656811 Started:0xc002af16c0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1101 01:06:22.156726   59148 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rgzt8" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.163387   59148 pod_ready.go:92] pod "coredns-5dd5756b68-rgzt8" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:22.163410   59148 pod_ready.go:81] duration metric: took 6.677078ms waiting for pod "coredns-5dd5756b68-rgzt8" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.163423   59148 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.168499   59148 pod_ready.go:92] pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:22.168519   59148 pod_ready.go:81] duration metric: took 5.088683ms waiting for pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.168528   59148 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.174117   59148 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:22.174143   59148 pod_ready.go:81] duration metric: took 5.607251ms waiting for pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.174157   59148 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.179321   59148 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:22.179344   59148 pod_ready.go:81] duration metric: took 5.178241ms waiting for pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.179356   59148 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kzgzn" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.554016   59148 pod_ready.go:92] pod "kube-proxy-kzgzn" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:22.554047   59148 pod_ready.go:81] duration metric: took 374.683914ms waiting for pod "kube-proxy-kzgzn" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.554061   59148 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.954192   59148 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:22.954216   59148 pod_ready.go:81] duration metric: took 400.146517ms waiting for pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.954226   59148 pod_ready.go:38] duration metric: took 13.943077925s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:06:22.954243   59148 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:06:22.954294   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:06:22.970594   59148 api_server.go:72] duration metric: took 14.351804953s to wait for apiserver process to appear ...
	I1101 01:06:22.970621   59148 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:06:22.970638   59148 api_server.go:253] Checking apiserver healthz at https://192.168.72.97:8444/healthz ...
	I1101 01:06:22.976061   59148 api_server.go:279] https://192.168.72.97:8444/healthz returned 200:
	ok
	I1101 01:06:22.977368   59148 api_server.go:141] control plane version: v1.28.3
	I1101 01:06:22.977390   59148 api_server.go:131] duration metric: took 6.761145ms to wait for apiserver health ...
	I1101 01:06:22.977398   59148 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:06:23.156987   59148 system_pods.go:59] 8 kube-system pods found
	I1101 01:06:23.157014   59148 system_pods.go:61] "coredns-5dd5756b68-rgzt8" [6d136c6a-e0b2-44c3-a17b-85649d6ff7b7] Running
	I1101 01:06:23.157018   59148 system_pods.go:61] "etcd-default-k8s-diff-port-639310" [9cc2eba7-c72f-4a6f-9c55-8cce5586b574] Running
	I1101 01:06:23.157024   59148 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-639310" [e2b16d1e-af9f-452e-8243-5267f781ab19] Running
	I1101 01:06:23.157028   59148 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-639310" [9173e21f-a13f-4234-94a1-1976881ee23d] Running
	I1101 01:06:23.157034   59148 system_pods.go:61] "kube-proxy-kzgzn" [32d59980-f28a-482c-9aa8-8502915417f0] Running
	I1101 01:06:23.157038   59148 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-639310" [449df462-911a-4afa-8ca5-f9fccce9ecac] Running
	I1101 01:06:23.157046   59148 system_pods.go:61] "metrics-server-57f55c9bc5-65ph4" [4683706e-65f6-4845-a5ad-60da8cd20d8e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:23.157053   59148 system_pods.go:61] "storage-provisioner" [eaba9583-e564-4804-9cd3-2b4de36c85da] Running
	I1101 01:06:23.157060   59148 system_pods.go:74] duration metric: took 179.656649ms to wait for pod list to return data ...
	I1101 01:06:23.157067   59148 default_sa.go:34] waiting for default service account to be created ...
	I1101 01:06:23.352990   59148 default_sa.go:45] found service account: "default"
	I1101 01:06:23.353024   59148 default_sa.go:55] duration metric: took 195.950242ms for default service account to be created ...
	I1101 01:06:23.353034   59148 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 01:06:23.557472   59148 system_pods.go:86] 8 kube-system pods found
	I1101 01:06:23.557498   59148 system_pods.go:89] "coredns-5dd5756b68-rgzt8" [6d136c6a-e0b2-44c3-a17b-85649d6ff7b7] Running
	I1101 01:06:23.557505   59148 system_pods.go:89] "etcd-default-k8s-diff-port-639310" [9cc2eba7-c72f-4a6f-9c55-8cce5586b574] Running
	I1101 01:06:23.557512   59148 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-639310" [e2b16d1e-af9f-452e-8243-5267f781ab19] Running
	I1101 01:06:23.557518   59148 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-639310" [9173e21f-a13f-4234-94a1-1976881ee23d] Running
	I1101 01:06:23.557524   59148 system_pods.go:89] "kube-proxy-kzgzn" [32d59980-f28a-482c-9aa8-8502915417f0] Running
	I1101 01:06:23.557531   59148 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-639310" [449df462-911a-4afa-8ca5-f9fccce9ecac] Running
	I1101 01:06:23.557541   59148 system_pods.go:89] "metrics-server-57f55c9bc5-65ph4" [4683706e-65f6-4845-a5ad-60da8cd20d8e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:23.557554   59148 system_pods.go:89] "storage-provisioner" [eaba9583-e564-4804-9cd3-2b4de36c85da] Running
	I1101 01:06:23.557561   59148 system_pods.go:126] duration metric: took 204.522772ms to wait for k8s-apps to be running ...
	I1101 01:06:23.557571   59148 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 01:06:23.557614   59148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:06:23.572950   59148 system_svc.go:56] duration metric: took 15.367105ms WaitForService to wait for kubelet.
	I1101 01:06:23.572979   59148 kubeadm.go:581] duration metric: took 14.954198383s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1101 01:06:23.572995   59148 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:06:23.754816   59148 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:06:23.754852   59148 node_conditions.go:123] node cpu capacity is 2
	I1101 01:06:23.754865   59148 node_conditions.go:105] duration metric: took 181.864765ms to run NodePressure ...
	I1101 01:06:23.754879   59148 start.go:228] waiting for startup goroutines ...
	I1101 01:06:23.754887   59148 start.go:233] waiting for cluster config update ...
	I1101 01:06:23.754902   59148 start.go:242] writing updated cluster config ...
	I1101 01:06:23.755221   59148 ssh_runner.go:195] Run: rm -f paused
	I1101 01:06:23.805298   59148 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1101 01:06:23.807226   59148 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-639310" cluster and "default" namespace by default
	I1101 01:06:24.353352   58676 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.430634921s)
	I1101 01:06:24.353418   58676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:06:24.367115   58676 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:06:24.376272   58676 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:06:24.385067   58676 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:06:24.385105   58676 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1101 01:06:24.436586   58676 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1101 01:06:24.436698   58676 kubeadm.go:322] [preflight] Running pre-flight checks
	I1101 01:06:24.592267   58676 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 01:06:24.592409   58676 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 01:06:24.592529   58676 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 01:06:24.834834   58676 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 01:06:24.836680   58676 out.go:204]   - Generating certificates and keys ...
	I1101 01:06:24.836825   58676 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1101 01:06:24.836918   58676 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1101 01:06:24.837052   58676 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 01:06:24.837150   58676 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1101 01:06:24.837378   58676 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1101 01:06:24.838501   58676 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1101 01:06:24.838970   58676 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1101 01:06:24.839488   58676 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1101 01:06:24.840058   58676 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 01:06:24.840454   58676 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 01:06:24.840925   58676 kubeadm.go:322] [certs] Using the existing "sa" key
	I1101 01:06:24.841017   58676 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 01:06:25.117460   58676 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 01:06:25.218894   58676 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 01:06:25.319416   58676 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 01:06:25.555023   58676 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 01:06:25.555490   58676 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 01:06:25.558041   58676 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 01:06:25.559946   58676 out.go:204]   - Booting up control plane ...
	I1101 01:06:25.560090   58676 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 01:06:25.560212   58676 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 01:06:25.560321   58676 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 01:06:25.577307   58676 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 01:06:25.580427   58676 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 01:06:25.580508   58676 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1101 01:06:25.710362   58676 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 01:06:23.963710   58823 pod_ready.go:102] pod "coredns-5644d7b6d9-v2xlz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:26.455851   58823 pod_ready.go:92] pod "coredns-5644d7b6d9-v2xlz" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:26.455880   58823 pod_ready.go:81] duration metric: took 9.548782268s waiting for pod "coredns-5644d7b6d9-v2xlz" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:26.455889   58823 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hkl2m" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:26.461243   58823 pod_ready.go:92] pod "kube-proxy-hkl2m" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:26.461277   58823 pod_ready.go:81] duration metric: took 5.380815ms waiting for pod "kube-proxy-hkl2m" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:26.461289   58823 pod_ready.go:38] duration metric: took 9.575303239s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:06:26.461314   58823 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:06:26.461372   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:06:26.476212   58823 api_server.go:72] duration metric: took 9.737981323s to wait for apiserver process to appear ...
	I1101 01:06:26.476245   58823 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:06:26.476268   58823 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I1101 01:06:26.483060   58823 api_server.go:279] https://192.168.39.90:8443/healthz returned 200:
	ok
	I1101 01:06:26.484299   58823 api_server.go:141] control plane version: v1.16.0
	I1101 01:06:26.484328   58823 api_server.go:131] duration metric: took 8.074303ms to wait for apiserver health ...
	I1101 01:06:26.484342   58823 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:06:26.488710   58823 system_pods.go:59] 4 kube-system pods found
	I1101 01:06:26.488745   58823 system_pods.go:61] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:26.488753   58823 system_pods.go:61] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:26.488766   58823 system_pods.go:61] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:26.488775   58823 system_pods.go:61] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:26.488787   58823 system_pods.go:74] duration metric: took 4.438458ms to wait for pod list to return data ...
	I1101 01:06:26.488797   58823 default_sa.go:34] waiting for default service account to be created ...
	I1101 01:06:26.492513   58823 default_sa.go:45] found service account: "default"
	I1101 01:06:26.492543   58823 default_sa.go:55] duration metric: took 3.739583ms for default service account to be created ...
	I1101 01:06:26.492553   58823 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 01:06:26.496897   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:26.496924   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:26.496929   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:26.496936   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:26.496942   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:26.496956   58823 retry.go:31] will retry after 215.348005ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:26.718021   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:26.718055   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:26.718064   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:26.718080   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:26.718086   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:26.718103   58823 retry.go:31] will retry after 357.067185ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:27.080480   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:27.080519   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:27.080528   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:27.080539   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:27.080548   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:27.080572   58823 retry.go:31] will retry after 441.083478ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:27.528922   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:27.528955   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:27.528964   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:27.528975   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:27.528984   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:27.529008   58823 retry.go:31] will retry after 595.152055ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:28.129735   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:28.129760   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:28.129765   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:28.129772   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:28.129778   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:28.129794   58823 retry.go:31] will retry after 591.454083ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:28.726058   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:28.726089   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:28.726097   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:28.726108   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:28.726118   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:28.726142   58823 retry.go:31] will retry after 682.338416ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:29.414282   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:29.414311   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:29.414321   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:29.414330   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:29.414338   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:29.414356   58823 retry.go:31] will retry after 953.248535ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:30.373950   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:30.373989   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:30.373998   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:30.374017   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:30.374028   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:30.374048   58823 retry.go:31] will retry after 1.291166145s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:31.671462   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:31.671516   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:31.671526   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:31.671537   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:31.671546   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:31.671565   58823 retry.go:31] will retry after 1.413833897s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:33.713596   58676 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002646 seconds
	I1101 01:06:33.713733   58676 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 01:06:33.731994   58676 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 01:06:34.275298   58676 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 01:06:34.275497   58676 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-008483 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 01:06:34.792259   58676 kubeadm.go:322] [bootstrap-token] Using token: ft1765.cra2ecqpjz8r5s0a
	I1101 01:06:34.793944   58676 out.go:204]   - Configuring RBAC rules ...
	I1101 01:06:34.794105   58676 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 01:06:34.800902   58676 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 01:06:34.811310   58676 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 01:06:34.821309   58676 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 01:06:34.826523   58676 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 01:06:34.832305   58676 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 01:06:34.852131   58676 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 01:06:35.137771   58676 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1101 01:06:35.206006   58676 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1101 01:06:35.207223   58676 kubeadm.go:322] 
	I1101 01:06:35.207316   58676 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1101 01:06:35.207327   58676 kubeadm.go:322] 
	I1101 01:06:35.207404   58676 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1101 01:06:35.207413   58676 kubeadm.go:322] 
	I1101 01:06:35.207448   58676 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1101 01:06:35.207528   58676 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 01:06:35.207619   58676 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 01:06:35.207640   58676 kubeadm.go:322] 
	I1101 01:06:35.207703   58676 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1101 01:06:35.207722   58676 kubeadm.go:322] 
	I1101 01:06:35.207796   58676 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 01:06:35.207805   58676 kubeadm.go:322] 
	I1101 01:06:35.207878   58676 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1101 01:06:35.208001   58676 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 01:06:35.208102   58676 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 01:06:35.208111   58676 kubeadm.go:322] 
	I1101 01:06:35.208207   58676 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 01:06:35.208314   58676 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1101 01:06:35.208337   58676 kubeadm.go:322] 
	I1101 01:06:35.208459   58676 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ft1765.cra2ecqpjz8r5s0a \
	I1101 01:06:35.208636   58676 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 \
	I1101 01:06:35.208674   58676 kubeadm.go:322] 	--control-plane 
	I1101 01:06:35.208687   58676 kubeadm.go:322] 
	I1101 01:06:35.208812   58676 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1101 01:06:35.208823   58676 kubeadm.go:322] 
	I1101 01:06:35.208936   58676 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ft1765.cra2ecqpjz8r5s0a \
	I1101 01:06:35.209057   58676 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 
	I1101 01:06:35.209758   58676 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 01:06:35.209780   58676 cni.go:84] Creating CNI manager for ""
	I1101 01:06:35.209790   58676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:06:35.211735   58676 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:06:35.213123   58676 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:06:35.235025   58676 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1101 01:06:35.271015   58676 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 01:06:35.271092   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9 minikube.k8s.io/name=no-preload-008483 minikube.k8s.io/updated_at=2023_11_01T01_06_35_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:35.271099   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:35.305061   58676 ops.go:34] apiserver oom_adj: -16
	I1101 01:06:35.663339   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:35.805680   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:33.090990   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:33.091030   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:33.091038   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:33.091049   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:33.091060   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:33.091078   58823 retry.go:31] will retry after 2.252641435s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:35.350673   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:35.350703   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:35.350711   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:35.350722   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:35.350735   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:35.350753   58823 retry.go:31] will retry after 2.131984659s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:36.402770   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:36.902353   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:37.402763   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:37.902598   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:38.401883   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:38.902775   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:39.402062   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:39.902544   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:40.402350   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:40.901853   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:37.489100   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:37.489127   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:37.489132   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:37.489141   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:37.489151   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:37.489169   58823 retry.go:31] will retry after 3.273821759s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:40.767389   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:40.767409   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:40.767414   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:40.767421   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:40.767427   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:40.767441   58823 retry.go:31] will retry after 4.351278698s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:41.402632   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:41.901859   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:42.402379   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:42.902816   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:43.402503   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:43.902158   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:44.402562   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:44.901867   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:45.401852   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:45.902865   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:45.124108   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:45.124138   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:45.124147   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:45.124158   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:45.124166   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:45.124184   58823 retry.go:31] will retry after 4.53047058s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:46.402463   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:46.902480   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:47.402022   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:47.568628   58676 kubeadm.go:1081] duration metric: took 12.297606595s to wait for elevateKubeSystemPrivileges.
	I1101 01:06:47.568672   58676 kubeadm.go:406] StartCluster complete in 5m8.570526689s
	I1101 01:06:47.568696   58676 settings.go:142] acquiring lock: {Name:mk7f269e64dfd8d176737f993e01f6e6badafbc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:06:47.568787   58676 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 01:06:47.570839   58676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/kubeconfig: {Name:mk08da65b6c71084e1cfafb19800038e8c8303e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:06:47.571093   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 01:06:47.571207   58676 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1101 01:06:47.571281   58676 addons.go:69] Setting storage-provisioner=true in profile "no-preload-008483"
	I1101 01:06:47.571307   58676 addons.go:69] Setting metrics-server=true in profile "no-preload-008483"
	I1101 01:06:47.571329   58676 addons.go:231] Setting addon metrics-server=true in "no-preload-008483"
	I1101 01:06:47.571345   58676 config.go:182] Loaded profile config "no-preload-008483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:06:47.571360   58676 addons.go:69] Setting default-storageclass=true in profile "no-preload-008483"
	I1101 01:06:47.571369   58676 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-008483"
	W1101 01:06:47.571348   58676 addons.go:240] addon metrics-server should already be in state true
	I1101 01:06:47.571441   58676 host.go:66] Checking if "no-preload-008483" exists ...
	I1101 01:06:47.571312   58676 addons.go:231] Setting addon storage-provisioner=true in "no-preload-008483"
	W1101 01:06:47.571490   58676 addons.go:240] addon storage-provisioner should already be in state true
	I1101 01:06:47.571527   58676 host.go:66] Checking if "no-preload-008483" exists ...
	I1101 01:06:47.571816   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.571815   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.571873   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.571892   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.571873   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.572006   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.590259   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39063
	I1101 01:06:47.590724   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.591055   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39635
	I1101 01:06:47.591202   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.591220   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.591229   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46549
	I1101 01:06:47.591621   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.591707   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.591743   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.592428   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.592471   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.592794   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.592808   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.592822   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.592826   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.593236   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.593283   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.593437   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetState
	I1101 01:06:47.593927   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.593966   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.598345   58676 addons.go:231] Setting addon default-storageclass=true in "no-preload-008483"
	W1101 01:06:47.598381   58676 addons.go:240] addon default-storageclass should already be in state true
	I1101 01:06:47.598413   58676 host.go:66] Checking if "no-preload-008483" exists ...
	I1101 01:06:47.598819   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.598871   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.613965   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43751
	I1101 01:06:47.614004   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40855
	I1101 01:06:47.614542   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.614669   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.615105   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.615121   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.615151   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.615189   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.615476   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.615537   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.615690   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetState
	I1101 01:06:47.615767   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetState
	I1101 01:06:47.617847   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:06:47.620144   58676 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:06:47.618264   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45253
	I1101 01:06:47.618444   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:06:47.621319   58676 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-008483" context rescaled to 1 replicas
	I1101 01:06:47.621520   58676 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.140 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 01:06:47.623048   58676 out.go:177] * Verifying Kubernetes components...
	I1101 01:06:47.621641   58676 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:06:47.621894   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.625008   58676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 01:06:47.625024   58676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:06:47.626461   58676 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1101 01:06:47.628411   58676 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 01:06:47.628425   58676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 01:06:47.628439   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:06:47.626617   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:06:47.627063   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.628510   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.628907   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.629438   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.629480   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.631968   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:06:47.632175   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:06:47.632212   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:06:47.632315   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:06:47.632508   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:06:47.632679   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:06:47.632739   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:06:47.632795   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:06:47.633383   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:06:47.633403   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:06:47.633427   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:06:47.633584   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:06:47.633708   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:06:47.633891   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:06:47.650937   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41791
	I1101 01:06:47.651372   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.651921   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.651956   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.652322   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.652536   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetState
	I1101 01:06:47.654393   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:06:47.654706   58676 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 01:06:47.654721   58676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 01:06:47.654743   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:06:47.657743   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:06:47.658176   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:06:47.658204   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:06:47.658448   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:06:47.658673   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:06:47.658836   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:06:47.659008   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:06:47.808648   58676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:06:47.837158   58676 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 01:06:47.837181   58676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1101 01:06:47.846004   58676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 01:06:47.882427   58676 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 01:06:47.882454   58676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 01:06:47.899419   58676 node_ready.go:35] waiting up to 6m0s for node "no-preload-008483" to be "Ready" ...
	I1101 01:06:47.899496   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 01:06:47.919788   58676 node_ready.go:49] node "no-preload-008483" has status "Ready":"True"
	I1101 01:06:47.919821   58676 node_ready.go:38] duration metric: took 20.370648ms waiting for node "no-preload-008483" to be "Ready" ...
	I1101 01:06:47.919836   58676 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:06:47.926205   58676 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:06:47.926232   58676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 01:06:47.930715   58676 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5tp9h" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:47.982413   58676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:06:49.813480   58676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.004790768s)
	I1101 01:06:49.813519   58676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.967476056s)
	I1101 01:06:49.813564   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:49.813588   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:49.813528   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:49.813617   58676 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.914052615s)
	I1101 01:06:49.813634   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:49.813643   58676 start.go:926] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1101 01:06:49.813924   58676 main.go:141] libmachine: (no-preload-008483) DBG | Closing plugin on server side
	I1101 01:06:49.813935   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:49.813956   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:49.813970   58676 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:49.813979   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:49.813980   58676 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:49.813990   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:49.813991   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:49.814014   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:49.814239   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:49.814258   58676 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:49.814321   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:49.814339   58676 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:49.857721   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:49.857749   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:49.858034   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:49.858053   58676 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:50.026844   58676 pod_ready.go:97] error getting pod "coredns-5dd5756b68-5tp9h" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-5tp9h" not found
	I1101 01:06:50.026876   58676 pod_ready.go:81] duration metric: took 2.096134316s waiting for pod "coredns-5dd5756b68-5tp9h" in "kube-system" namespace to be "Ready" ...
	E1101 01:06:50.026888   58676 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-5tp9h" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-5tp9h" not found
	I1101 01:06:50.026898   58676 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-m8v7v" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:50.204452   58676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.22199218s)
	I1101 01:06:50.204543   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:50.204561   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:50.204896   58676 main.go:141] libmachine: (no-preload-008483) DBG | Closing plugin on server side
	I1101 01:06:50.204985   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:50.205017   58676 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:50.205046   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:50.205064   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:50.205339   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:50.205360   58676 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:50.205371   58676 addons.go:467] Verifying addon metrics-server=true in "no-preload-008483"
	I1101 01:06:50.205393   58676 main.go:141] libmachine: (no-preload-008483) DBG | Closing plugin on server side
	I1101 01:06:50.207552   58676 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1101 01:06:50.208879   58676 addons.go:502] enable addons completed in 2.637673191s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1101 01:06:49.663546   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:49.663578   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:49.663585   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:49.663595   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:49.663604   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:49.663623   58823 retry.go:31] will retry after 5.557220121s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:52.106184   58676 pod_ready.go:92] pod "coredns-5dd5756b68-m8v7v" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:52.106208   58676 pod_ready.go:81] duration metric: took 2.079304042s waiting for pod "coredns-5dd5756b68-m8v7v" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.106218   58676 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.112508   58676 pod_ready.go:92] pod "etcd-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:52.112531   58676 pod_ready.go:81] duration metric: took 6.307404ms waiting for pod "etcd-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.112540   58676 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.119263   58676 pod_ready.go:92] pod "kube-apiserver-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:52.119296   58676 pod_ready.go:81] duration metric: took 6.748553ms waiting for pod "kube-apiserver-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.119311   58676 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.125594   58676 pod_ready.go:92] pod "kube-controller-manager-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:52.125619   58676 pod_ready.go:81] duration metric: took 6.30051ms waiting for pod "kube-controller-manager-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.125629   58676 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4cx5t" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.503777   58676 pod_ready.go:92] pod "kube-proxy-4cx5t" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:52.503802   58676 pod_ready.go:81] duration metric: took 378.166648ms waiting for pod "kube-proxy-4cx5t" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.503811   58676 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.904254   58676 pod_ready.go:92] pod "kube-scheduler-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:52.904275   58676 pod_ready.go:81] duration metric: took 400.457426ms waiting for pod "kube-scheduler-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.904284   58676 pod_ready.go:38] duration metric: took 4.984437509s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:06:52.904303   58676 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:06:52.904352   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:06:52.917549   58676 api_server.go:72] duration metric: took 5.295984843s to wait for apiserver process to appear ...
	I1101 01:06:52.917576   58676 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:06:52.917595   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:06:52.926515   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 200:
	ok
	I1101 01:06:52.927673   58676 api_server.go:141] control plane version: v1.28.3
	I1101 01:06:52.927692   58676 api_server.go:131] duration metric: took 10.109726ms to wait for apiserver health ...
	I1101 01:06:52.927700   58676 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:06:53.109620   58676 system_pods.go:59] 8 kube-system pods found
	I1101 01:06:53.109648   58676 system_pods.go:61] "coredns-5dd5756b68-m8v7v" [351a9458-075b-40d1-96d1-86a450a99251] Running
	I1101 01:06:53.109653   58676 system_pods.go:61] "etcd-no-preload-008483" [e1db4a59-f5e6-4114-a942-1faf4ff84af2] Running
	I1101 01:06:53.109657   58676 system_pods.go:61] "kube-apiserver-no-preload-008483" [f8f8bb39-3093-44bb-8255-5a7d78437a75] Running
	I1101 01:06:53.109661   58676 system_pods.go:61] "kube-controller-manager-no-preload-008483" [a45df9e4-3399-4c21-981f-3c3caaed52a8] Running
	I1101 01:06:53.109665   58676 system_pods.go:61] "kube-proxy-4cx5t" [57c1e87a-aa14-440d-9001-a6ba2ab7c8c6] Running
	I1101 01:06:53.109670   58676 system_pods.go:61] "kube-scheduler-no-preload-008483" [329b7a2d-6146-4e08-910e-ed4d40f57dcb] Running
	I1101 01:06:53.109676   58676 system_pods.go:61] "metrics-server-57f55c9bc5-qcxt7" [bf444b92-dd54-43fc-a9a8-0e9000b562e3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:53.109684   58676 system_pods.go:61] "storage-provisioner" [909163da-9021-4cee-9a72-1bc9b6ae9390] Running
	I1101 01:06:53.109693   58676 system_pods.go:74] duration metric: took 181.986766ms to wait for pod list to return data ...
	I1101 01:06:53.109706   58676 default_sa.go:34] waiting for default service account to be created ...
	I1101 01:06:53.305872   58676 default_sa.go:45] found service account: "default"
	I1101 01:06:53.305904   58676 default_sa.go:55] duration metric: took 196.187269ms for default service account to be created ...
	I1101 01:06:53.305919   58676 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 01:06:53.506566   58676 system_pods.go:86] 8 kube-system pods found
	I1101 01:06:53.506601   58676 system_pods.go:89] "coredns-5dd5756b68-m8v7v" [351a9458-075b-40d1-96d1-86a450a99251] Running
	I1101 01:06:53.506610   58676 system_pods.go:89] "etcd-no-preload-008483" [e1db4a59-f5e6-4114-a942-1faf4ff84af2] Running
	I1101 01:06:53.506618   58676 system_pods.go:89] "kube-apiserver-no-preload-008483" [f8f8bb39-3093-44bb-8255-5a7d78437a75] Running
	I1101 01:06:53.506625   58676 system_pods.go:89] "kube-controller-manager-no-preload-008483" [a45df9e4-3399-4c21-981f-3c3caaed52a8] Running
	I1101 01:06:53.506631   58676 system_pods.go:89] "kube-proxy-4cx5t" [57c1e87a-aa14-440d-9001-a6ba2ab7c8c6] Running
	I1101 01:06:53.506640   58676 system_pods.go:89] "kube-scheduler-no-preload-008483" [329b7a2d-6146-4e08-910e-ed4d40f57dcb] Running
	I1101 01:06:53.506651   58676 system_pods.go:89] "metrics-server-57f55c9bc5-qcxt7" [bf444b92-dd54-43fc-a9a8-0e9000b562e3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:53.506664   58676 system_pods.go:89] "storage-provisioner" [909163da-9021-4cee-9a72-1bc9b6ae9390] Running
	I1101 01:06:53.506675   58676 system_pods.go:126] duration metric: took 200.749464ms to wait for k8s-apps to be running ...
	I1101 01:06:53.506692   58676 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 01:06:53.506747   58676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:06:53.519471   58676 system_svc.go:56] duration metric: took 12.766173ms WaitForService to wait for kubelet.
	I1101 01:06:53.519502   58676 kubeadm.go:581] duration metric: took 5.897944072s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1101 01:06:53.519525   58676 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:06:53.705460   58676 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:06:53.705490   58676 node_conditions.go:123] node cpu capacity is 2
	I1101 01:06:53.705501   58676 node_conditions.go:105] duration metric: took 185.970851ms to run NodePressure ...
	I1101 01:06:53.705515   58676 start.go:228] waiting for startup goroutines ...
	I1101 01:06:53.705523   58676 start.go:233] waiting for cluster config update ...
	I1101 01:06:53.705537   58676 start.go:242] writing updated cluster config ...
	I1101 01:06:53.705824   58676 ssh_runner.go:195] Run: rm -f paused
	I1101 01:06:53.758508   58676 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1101 01:06:53.761998   58676 out.go:177] * Done! kubectl is now configured to use "no-preload-008483" cluster and "default" namespace by default
	I1101 01:06:55.226416   58823 system_pods.go:86] 5 kube-system pods found
	I1101 01:06:55.226443   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:55.226449   58823 system_pods.go:89] "kube-apiserver-old-k8s-version-330042" [1d813832-7c56-439f-aee9-c5c326e6cd3d] Pending
	I1101 01:06:55.226453   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:55.226460   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:55.226466   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:55.226480   58823 retry.go:31] will retry after 6.901184226s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:07:02.133379   58823 system_pods.go:86] 5 kube-system pods found
	I1101 01:07:02.133412   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:07:02.133421   58823 system_pods.go:89] "kube-apiserver-old-k8s-version-330042" [1d813832-7c56-439f-aee9-c5c326e6cd3d] Running
	I1101 01:07:02.133427   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:07:02.133442   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:07:02.133451   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:07:02.133471   58823 retry.go:31] will retry after 10.272464072s: missing components: etcd, kube-controller-manager, kube-scheduler
	I1101 01:07:12.412133   58823 system_pods.go:86] 5 kube-system pods found
	I1101 01:07:12.412166   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:07:12.412175   58823 system_pods.go:89] "kube-apiserver-old-k8s-version-330042" [1d813832-7c56-439f-aee9-c5c326e6cd3d] Running
	I1101 01:07:12.412181   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:07:12.412193   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:07:12.412202   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:07:12.412221   58823 retry.go:31] will retry after 11.290918588s: missing components: etcd, kube-controller-manager, kube-scheduler
	I1101 01:07:23.709462   58823 system_pods.go:86] 8 kube-system pods found
	I1101 01:07:23.709495   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:07:23.709503   58823 system_pods.go:89] "etcd-old-k8s-version-330042" [fc62fe53-9611-4b3d-9dca-a30d58618b2b] Running
	I1101 01:07:23.709510   58823 system_pods.go:89] "kube-apiserver-old-k8s-version-330042" [1d813832-7c56-439f-aee9-c5c326e6cd3d] Running
	I1101 01:07:23.709517   58823 system_pods.go:89] "kube-controller-manager-old-k8s-version-330042" [8ad0ccf9-fa8e-4205-b89c-f5f57cb7be6e] Running
	I1101 01:07:23.709524   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:07:23.709528   58823 system_pods.go:89] "kube-scheduler-old-k8s-version-330042" [2b077f6b-8077-4ccb-93c2-c6d3383b1113] Pending
	I1101 01:07:23.709534   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:07:23.709543   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:07:23.709559   58823 retry.go:31] will retry after 12.900513481s: missing components: kube-scheduler
	I1101 01:07:36.615720   58823 system_pods.go:86] 8 kube-system pods found
	I1101 01:07:36.615746   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:07:36.615751   58823 system_pods.go:89] "etcd-old-k8s-version-330042" [fc62fe53-9611-4b3d-9dca-a30d58618b2b] Running
	I1101 01:07:36.615756   58823 system_pods.go:89] "kube-apiserver-old-k8s-version-330042" [1d813832-7c56-439f-aee9-c5c326e6cd3d] Running
	I1101 01:07:36.615760   58823 system_pods.go:89] "kube-controller-manager-old-k8s-version-330042" [8ad0ccf9-fa8e-4205-b89c-f5f57cb7be6e] Running
	I1101 01:07:36.615763   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:07:36.615767   58823 system_pods.go:89] "kube-scheduler-old-k8s-version-330042" [2b077f6b-8077-4ccb-93c2-c6d3383b1113] Running
	I1101 01:07:36.615774   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:07:36.615780   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:07:36.615787   58823 system_pods.go:126] duration metric: took 1m10.123228938s to wait for k8s-apps to be running ...
	I1101 01:07:36.615793   58823 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 01:07:36.615837   58823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:07:36.634354   58823 system_svc.go:56] duration metric: took 18.547208ms WaitForService to wait for kubelet.
	I1101 01:07:36.634387   58823 kubeadm.go:581] duration metric: took 1m19.896166299s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1101 01:07:36.634412   58823 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:07:36.638286   58823 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:07:36.638315   58823 node_conditions.go:123] node cpu capacity is 2
	I1101 01:07:36.638329   58823 node_conditions.go:105] duration metric: took 3.911826ms to run NodePressure ...
	I1101 01:07:36.638344   58823 start.go:228] waiting for startup goroutines ...
	I1101 01:07:36.638351   58823 start.go:233] waiting for cluster config update ...
	I1101 01:07:36.638365   58823 start.go:242] writing updated cluster config ...
	I1101 01:07:36.638658   58823 ssh_runner.go:195] Run: rm -f paused
	I1101 01:07:36.688409   58823 start.go:600] kubectl: 1.28.3, cluster: 1.16.0 (minor skew: 12)
	I1101 01:07:36.690520   58823 out.go:177] 
	W1101 01:07:36.692006   58823 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.16.0.
	I1101 01:07:36.693512   58823 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1101 01:07:36.694940   58823 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-330042" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-11-01 01:01:08 UTC, ends at Wed 2023-11-01 01:15:55 UTC. --
	Nov 01 01:15:55 no-preload-008483 crio[709]: time="2023-11-01 01:15:55.447421769Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698801355447408192,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=dbe1f3b5-8028-490f-8649-56ec9a7a0b78 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:15:55 no-preload-008483 crio[709]: time="2023-11-01 01:15:55.448042748Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1468bbec-eedb-489c-85ed-bd59dd9cbf9a name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:15:55 no-preload-008483 crio[709]: time="2023-11-01 01:15:55.448113138Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1468bbec-eedb-489c-85ed-bd59dd9cbf9a name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:15:55 no-preload-008483 crio[709]: time="2023-11-01 01:15:55.448322814Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e91e26d41e22a636f3966fdb1dd6db999eae4ea6cff3e1290036854c8960f051,PodSandboxId:195d8157304c1005ac61e4f188e7c5240de832d9e80aff752fbf253770b0622a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1698800811030815361,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 909163da-9021-4cee-9a72-1bc9b6ae9390,},Annotations:map[string]string{io.kubernetes.container.hash: 1e44b7d8,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edd94ede27bbbe8cfe0647252d6bed169e64b894a76d5a29893d784dc05f519b,PodSandboxId:1e577e533c6773fe74f90f9960a1e296e7b3d9f2168345a6deecf8dbe94cb97c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1698800811087201369,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4cx5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57c1e87a-aa14-440d-9001-a6ba2ab7c8c6,},Annotations:map[string]string{io.kubernetes.container.hash: 510a8192,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73703bcd827a65c00d49d0b850c3eae382a733d0d82a35a7b6f0540825dcf58,PodSandboxId:b3d442321510a7263cd825e67380d31225427312783addaa6b0e07c26484866d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1698800810274125365,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-m8v7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 351a9458-075b-40d1-96d1-86a450a99251,},Annotations:map[string]string{io.kubernetes.container.hash: 82d873be,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1ba3a1083dd2dcb1278523bde1f5387fb968eeba4196562c8bf480c69743a4a,PodSandboxId:3c971c491c8b9e730b6fd26723ec0ca29ef412e4df345a6cbca3317e6bdb84b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1698800787679889858,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-008483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8aa05c6e537fd3a0f101e32fb442ce36,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ae2965afd64df8be7fbc531d17512e8c69ea84d779fdb1bb8dda8a305cbc0ff,PodSandboxId:8f18c30c727e68b06fec8778b482096e772177c5747ac86ff5da1828206108ca,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1698800787610776578,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-008483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3605feb9b1e84ca198f01f1457eb52,},Annotations:map
[string]string{io.kubernetes.container.hash: ce0a95cd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b382513a898be97c48e1ae6d9ba0083e519d059d2e5161e8d91c119e828b9535,PodSandboxId:7512f2f28d7dd29242b39028586b60a61c9522a2e810f28949a5174bb67230a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1698800787440816689,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-008483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e5044dcdf76b056f4fa816fd
0dda7c1,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4de02e7339a911abc7905e3d5b90216f4a37571e0ffcb1411f51374a244ef3fe,PodSandboxId:7b470162184f3df0fb98378ca579fceaaf24b754964960b2a9ff1d127612a437,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1698800787366164936,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-008483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ebfcbba23e72624e12f49fd78f84e46,},A
nnotations:map[string]string{io.kubernetes.container.hash: 7d6e5ab1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1468bbec-eedb-489c-85ed-bd59dd9cbf9a name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:15:55 no-preload-008483 crio[709]: time="2023-11-01 01:15:55.490741639Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=49d3d48b-ba08-4928-aa7d-9094e1d1611d name=/runtime.v1.RuntimeService/Version
	Nov 01 01:15:55 no-preload-008483 crio[709]: time="2023-11-01 01:15:55.490838391Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=49d3d48b-ba08-4928-aa7d-9094e1d1611d name=/runtime.v1.RuntimeService/Version
	Nov 01 01:15:55 no-preload-008483 crio[709]: time="2023-11-01 01:15:55.492535548Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=718375c0-dce1-4642-a49b-4eb5a6d53ec6 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:15:55 no-preload-008483 crio[709]: time="2023-11-01 01:15:55.492857881Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698801355492845789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=718375c0-dce1-4642-a49b-4eb5a6d53ec6 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:15:55 no-preload-008483 crio[709]: time="2023-11-01 01:15:55.493615145Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a65debcd-a22f-4b6f-9f32-1781eca9e945 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:15:55 no-preload-008483 crio[709]: time="2023-11-01 01:15:55.493683403Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a65debcd-a22f-4b6f-9f32-1781eca9e945 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:15:55 no-preload-008483 crio[709]: time="2023-11-01 01:15:55.493853972Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e91e26d41e22a636f3966fdb1dd6db999eae4ea6cff3e1290036854c8960f051,PodSandboxId:195d8157304c1005ac61e4f188e7c5240de832d9e80aff752fbf253770b0622a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1698800811030815361,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 909163da-9021-4cee-9a72-1bc9b6ae9390,},Annotations:map[string]string{io.kubernetes.container.hash: 1e44b7d8,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edd94ede27bbbe8cfe0647252d6bed169e64b894a76d5a29893d784dc05f519b,PodSandboxId:1e577e533c6773fe74f90f9960a1e296e7b3d9f2168345a6deecf8dbe94cb97c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1698800811087201369,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4cx5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57c1e87a-aa14-440d-9001-a6ba2ab7c8c6,},Annotations:map[string]string{io.kubernetes.container.hash: 510a8192,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73703bcd827a65c00d49d0b850c3eae382a733d0d82a35a7b6f0540825dcf58,PodSandboxId:b3d442321510a7263cd825e67380d31225427312783addaa6b0e07c26484866d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1698800810274125365,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-m8v7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 351a9458-075b-40d1-96d1-86a450a99251,},Annotations:map[string]string{io.kubernetes.container.hash: 82d873be,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1ba3a1083dd2dcb1278523bde1f5387fb968eeba4196562c8bf480c69743a4a,PodSandboxId:3c971c491c8b9e730b6fd26723ec0ca29ef412e4df345a6cbca3317e6bdb84b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1698800787679889858,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-008483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8aa05c6e537fd3a0f101e32fb442ce36,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ae2965afd64df8be7fbc531d17512e8c69ea84d779fdb1bb8dda8a305cbc0ff,PodSandboxId:8f18c30c727e68b06fec8778b482096e772177c5747ac86ff5da1828206108ca,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1698800787610776578,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-008483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3605feb9b1e84ca198f01f1457eb52,},Annotations:map
[string]string{io.kubernetes.container.hash: ce0a95cd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b382513a898be97c48e1ae6d9ba0083e519d059d2e5161e8d91c119e828b9535,PodSandboxId:7512f2f28d7dd29242b39028586b60a61c9522a2e810f28949a5174bb67230a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1698800787440816689,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-008483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e5044dcdf76b056f4fa816fd
0dda7c1,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4de02e7339a911abc7905e3d5b90216f4a37571e0ffcb1411f51374a244ef3fe,PodSandboxId:7b470162184f3df0fb98378ca579fceaaf24b754964960b2a9ff1d127612a437,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1698800787366164936,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-008483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ebfcbba23e72624e12f49fd78f84e46,},A
nnotations:map[string]string{io.kubernetes.container.hash: 7d6e5ab1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a65debcd-a22f-4b6f-9f32-1781eca9e945 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:15:55 no-preload-008483 crio[709]: time="2023-11-01 01:15:55.538013415Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=77dabefe-f0bc-42ab-b0ef-699956ff799b name=/runtime.v1.RuntimeService/Version
	Nov 01 01:15:55 no-preload-008483 crio[709]: time="2023-11-01 01:15:55.538096845Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=77dabefe-f0bc-42ab-b0ef-699956ff799b name=/runtime.v1.RuntimeService/Version
	Nov 01 01:15:55 no-preload-008483 crio[709]: time="2023-11-01 01:15:55.539466079Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=82082b2e-4b61-4f34-b8ac-8b2b5de258b6 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:15:55 no-preload-008483 crio[709]: time="2023-11-01 01:15:55.539848063Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698801355539833985,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=82082b2e-4b61-4f34-b8ac-8b2b5de258b6 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:15:55 no-preload-008483 crio[709]: time="2023-11-01 01:15:55.540465456Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1fe364c5-9793-44ea-aeb5-2467879ba53e name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:15:55 no-preload-008483 crio[709]: time="2023-11-01 01:15:55.540535756Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1fe364c5-9793-44ea-aeb5-2467879ba53e name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:15:55 no-preload-008483 crio[709]: time="2023-11-01 01:15:55.540683886Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e91e26d41e22a636f3966fdb1dd6db999eae4ea6cff3e1290036854c8960f051,PodSandboxId:195d8157304c1005ac61e4f188e7c5240de832d9e80aff752fbf253770b0622a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1698800811030815361,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 909163da-9021-4cee-9a72-1bc9b6ae9390,},Annotations:map[string]string{io.kubernetes.container.hash: 1e44b7d8,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edd94ede27bbbe8cfe0647252d6bed169e64b894a76d5a29893d784dc05f519b,PodSandboxId:1e577e533c6773fe74f90f9960a1e296e7b3d9f2168345a6deecf8dbe94cb97c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1698800811087201369,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4cx5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57c1e87a-aa14-440d-9001-a6ba2ab7c8c6,},Annotations:map[string]string{io.kubernetes.container.hash: 510a8192,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73703bcd827a65c00d49d0b850c3eae382a733d0d82a35a7b6f0540825dcf58,PodSandboxId:b3d442321510a7263cd825e67380d31225427312783addaa6b0e07c26484866d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1698800810274125365,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-m8v7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 351a9458-075b-40d1-96d1-86a450a99251,},Annotations:map[string]string{io.kubernetes.container.hash: 82d873be,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1ba3a1083dd2dcb1278523bde1f5387fb968eeba4196562c8bf480c69743a4a,PodSandboxId:3c971c491c8b9e730b6fd26723ec0ca29ef412e4df345a6cbca3317e6bdb84b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1698800787679889858,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-008483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8aa05c6e537fd3a0f101e32fb442ce36,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ae2965afd64df8be7fbc531d17512e8c69ea84d779fdb1bb8dda8a305cbc0ff,PodSandboxId:8f18c30c727e68b06fec8778b482096e772177c5747ac86ff5da1828206108ca,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1698800787610776578,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-008483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3605feb9b1e84ca198f01f1457eb52,},Annotations:map
[string]string{io.kubernetes.container.hash: ce0a95cd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b382513a898be97c48e1ae6d9ba0083e519d059d2e5161e8d91c119e828b9535,PodSandboxId:7512f2f28d7dd29242b39028586b60a61c9522a2e810f28949a5174bb67230a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1698800787440816689,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-008483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e5044dcdf76b056f4fa816fd
0dda7c1,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4de02e7339a911abc7905e3d5b90216f4a37571e0ffcb1411f51374a244ef3fe,PodSandboxId:7b470162184f3df0fb98378ca579fceaaf24b754964960b2a9ff1d127612a437,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1698800787366164936,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-008483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ebfcbba23e72624e12f49fd78f84e46,},A
nnotations:map[string]string{io.kubernetes.container.hash: 7d6e5ab1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1fe364c5-9793-44ea-aeb5-2467879ba53e name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:15:55 no-preload-008483 crio[709]: time="2023-11-01 01:15:55.574720844Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=70a0c969-2f9b-4f33-9975-ee703c070705 name=/runtime.v1.RuntimeService/Version
	Nov 01 01:15:55 no-preload-008483 crio[709]: time="2023-11-01 01:15:55.574806477Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=70a0c969-2f9b-4f33-9975-ee703c070705 name=/runtime.v1.RuntimeService/Version
	Nov 01 01:15:55 no-preload-008483 crio[709]: time="2023-11-01 01:15:55.576360580Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=770b766f-09fa-484a-84fe-b2ad76498dea name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:15:55 no-preload-008483 crio[709]: time="2023-11-01 01:15:55.576694818Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698801355576681934,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=770b766f-09fa-484a-84fe-b2ad76498dea name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:15:55 no-preload-008483 crio[709]: time="2023-11-01 01:15:55.577473133Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4b71b8aa-2ce1-4c1c-959a-a80a47714cb7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:15:55 no-preload-008483 crio[709]: time="2023-11-01 01:15:55.577535816Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4b71b8aa-2ce1-4c1c-959a-a80a47714cb7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:15:55 no-preload-008483 crio[709]: time="2023-11-01 01:15:55.577737101Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e91e26d41e22a636f3966fdb1dd6db999eae4ea6cff3e1290036854c8960f051,PodSandboxId:195d8157304c1005ac61e4f188e7c5240de832d9e80aff752fbf253770b0622a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1698800811030815361,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 909163da-9021-4cee-9a72-1bc9b6ae9390,},Annotations:map[string]string{io.kubernetes.container.hash: 1e44b7d8,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edd94ede27bbbe8cfe0647252d6bed169e64b894a76d5a29893d784dc05f519b,PodSandboxId:1e577e533c6773fe74f90f9960a1e296e7b3d9f2168345a6deecf8dbe94cb97c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1698800811087201369,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4cx5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57c1e87a-aa14-440d-9001-a6ba2ab7c8c6,},Annotations:map[string]string{io.kubernetes.container.hash: 510a8192,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73703bcd827a65c00d49d0b850c3eae382a733d0d82a35a7b6f0540825dcf58,PodSandboxId:b3d442321510a7263cd825e67380d31225427312783addaa6b0e07c26484866d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1698800810274125365,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-m8v7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 351a9458-075b-40d1-96d1-86a450a99251,},Annotations:map[string]string{io.kubernetes.container.hash: 82d873be,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1ba3a1083dd2dcb1278523bde1f5387fb968eeba4196562c8bf480c69743a4a,PodSandboxId:3c971c491c8b9e730b6fd26723ec0ca29ef412e4df345a6cbca3317e6bdb84b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1698800787679889858,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-008483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8aa05c6e537fd3a0f101e32fb442ce36,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ae2965afd64df8be7fbc531d17512e8c69ea84d779fdb1bb8dda8a305cbc0ff,PodSandboxId:8f18c30c727e68b06fec8778b482096e772177c5747ac86ff5da1828206108ca,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1698800787610776578,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-008483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3605feb9b1e84ca198f01f1457eb52,},Annotations:map
[string]string{io.kubernetes.container.hash: ce0a95cd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b382513a898be97c48e1ae6d9ba0083e519d059d2e5161e8d91c119e828b9535,PodSandboxId:7512f2f28d7dd29242b39028586b60a61c9522a2e810f28949a5174bb67230a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1698800787440816689,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-008483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e5044dcdf76b056f4fa816fd
0dda7c1,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4de02e7339a911abc7905e3d5b90216f4a37571e0ffcb1411f51374a244ef3fe,PodSandboxId:7b470162184f3df0fb98378ca579fceaaf24b754964960b2a9ff1d127612a437,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1698800787366164936,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-008483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ebfcbba23e72624e12f49fd78f84e46,},A
nnotations:map[string]string{io.kubernetes.container.hash: 7d6e5ab1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4b71b8aa-2ce1-4c1c-959a-a80a47714cb7 name=/runtime.v1.RuntimeService/ListContainers
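The Version, ImageFsInfo, and ListContainers request/response pairs above are ordinary CRI gRPC calls against the cri-o socket. A minimal Go sketch of issuing the same Version and ListContainers queries, assuming the k8s.io/cri-api v1 client and the unix:///var/run/crio/crio.sock endpoint recorded in the node's cri-socket annotation:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the cri-o socket reported in the kubeadm.alpha.kubernetes.io/cri-socket annotation.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Corresponds to the /runtime.v1.RuntimeService/Version entries in the debug log.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	// An empty filter returns the full container list, matching the
	// "No filters were applied, returning full container list" responses above.
	list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range list.Containers {
		fmt.Printf("%s  %s  attempt=%d  state=%s\n",
			c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}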
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	edd94ede27bbb       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   9 minutes ago       Running             kube-proxy                0                   1e577e533c677       kube-proxy-4cx5t
	e91e26d41e22a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   195d8157304c1       storage-provisioner
	d73703bcd827a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   b3d442321510a       coredns-5dd5756b68-m8v7v
	c1ba3a1083dd2       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   9 minutes ago       Running             kube-scheduler            2                   3c971c491c8b9       kube-scheduler-no-preload-008483
	0ae2965afd64d       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   8f18c30c727e6       etcd-no-preload-008483
	b382513a898be       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   9 minutes ago       Running             kube-controller-manager   2                   7512f2f28d7dd       kube-controller-manager-no-preload-008483
	4de02e7339a91       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   9 minutes ago       Running             kube-apiserver            2                   7b470162184f3       kube-apiserver-no-preload-008483
	
	* 
	* ==> coredns [d73703bcd827a65c00d49d0b850c3eae382a733d0d82a35a7b6f0540825dcf58] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-008483
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-008483
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9
	                    minikube.k8s.io/name=no-preload-008483
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_01T01_06_35_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Nov 2023 01:06:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-008483
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Nov 2023 01:15:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Nov 2023 01:12:01 +0000   Wed, 01 Nov 2023 01:06:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Nov 2023 01:12:01 +0000   Wed, 01 Nov 2023 01:06:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Nov 2023 01:12:01 +0000   Wed, 01 Nov 2023 01:06:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Nov 2023 01:12:01 +0000   Wed, 01 Nov 2023 01:06:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.140
	  Hostname:    no-preload-008483
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 52a9be0c57874f02a466e826841cfdf7
	  System UUID:                52a9be0c-5787-4f02-a466-e826841cfdf7
	  Boot ID:                    b0555844-9b75-4cdf-be6c-0809731b47c2
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-m8v7v                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 etcd-no-preload-008483                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m22s
	  kube-system                 kube-apiserver-no-preload-008483             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 kube-controller-manager-no-preload-008483    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-4cx5t                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-scheduler-no-preload-008483             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 metrics-server-57f55c9bc5-qcxt7              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m6s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m4s   kube-proxy       
	  Normal  Starting                 9m20s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m20s  kubelet          Node no-preload-008483 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s  kubelet          Node no-preload-008483 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s  kubelet          Node no-preload-008483 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m20s  kubelet          Node no-preload-008483 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m20s  kubelet          Node no-preload-008483 status is now: NodeReady
	  Normal  RegisteredNode           9m8s   node-controller  Node no-preload-008483 event: Registered Node no-preload-008483 in Controller
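The request and limit percentages shown by kubectl describe are fractions of the node's Allocatable values (cpu 2, memory 2165900Ki); with the totals above:

	cpu requests:    850m              / 2000m      = 0.425 → 42%
	memory requests: 370Mi (378880Ki)  / 2165900Ki  ≈ 0.175 → 17%
	memory limits:   170Mi (174080Ki)  / 2165900Ki  ≈ 0.080 → 8%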
	
	* 
	* ==> dmesg <==
	* [Nov 1 01:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068695] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.795198] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.991614] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.142829] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.737636] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.814081] systemd-fstab-generator[633]: Ignoring "noauto" for root device
	[  +0.131430] systemd-fstab-generator[644]: Ignoring "noauto" for root device
	[  +0.154693] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.110957] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.223749] systemd-fstab-generator[693]: Ignoring "noauto" for root device
	[ +31.732735] systemd-fstab-generator[1268]: Ignoring "noauto" for root device
	[Nov 1 01:02] kauditd_printk_skb: 29 callbacks suppressed
	[Nov 1 01:06] systemd-fstab-generator[3871]: Ignoring "noauto" for root device
	[  +9.293562] systemd-fstab-generator[4197]: Ignoring "noauto" for root device
	[ +13.745901] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [0ae2965afd64df8be7fbc531d17512e8c69ea84d779fdb1bb8dda8a305cbc0ff] <==
	* {"level":"info","ts":"2023-11-01T01:06:29.20338Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"85ea5ca067fb3fe3 switched to configuration voters=(9649626995603750883)"}
	{"level":"info","ts":"2023-11-01T01:06:29.203454Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"77a8f052fa5fccd4","local-member-id":"85ea5ca067fb3fe3","added-peer-id":"85ea5ca067fb3fe3","added-peer-peer-urls":["https://192.168.50.140:2380"]}
	{"level":"info","ts":"2023-11-01T01:06:29.206064Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-01T01:06:29.206373Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"85ea5ca067fb3fe3","initial-advertise-peer-urls":["https://192.168.50.140:2380"],"listen-peer-urls":["https://192.168.50.140:2380"],"advertise-client-urls":["https://192.168.50.140:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.140:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-01T01:06:29.206469Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-01T01:06:29.206631Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.140:2380"}
	{"level":"info","ts":"2023-11-01T01:06:29.206698Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.140:2380"}
	{"level":"info","ts":"2023-11-01T01:06:29.726842Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"85ea5ca067fb3fe3 is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-01T01:06:29.726997Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"85ea5ca067fb3fe3 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-01T01:06:29.727068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"85ea5ca067fb3fe3 received MsgPreVoteResp from 85ea5ca067fb3fe3 at term 1"}
	{"level":"info","ts":"2023-11-01T01:06:29.72709Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"85ea5ca067fb3fe3 became candidate at term 2"}
	{"level":"info","ts":"2023-11-01T01:06:29.727099Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"85ea5ca067fb3fe3 received MsgVoteResp from 85ea5ca067fb3fe3 at term 2"}
	{"level":"info","ts":"2023-11-01T01:06:29.727117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"85ea5ca067fb3fe3 became leader at term 2"}
	{"level":"info","ts":"2023-11-01T01:06:29.727137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 85ea5ca067fb3fe3 elected leader 85ea5ca067fb3fe3 at term 2"}
	{"level":"info","ts":"2023-11-01T01:06:29.730578Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"85ea5ca067fb3fe3","local-member-attributes":"{Name:no-preload-008483 ClientURLs:[https://192.168.50.140:2379]}","request-path":"/0/members/85ea5ca067fb3fe3/attributes","cluster-id":"77a8f052fa5fccd4","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-01T01:06:29.73072Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-01T01:06:29.731705Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-01T01:06:29.731816Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T01:06:29.732043Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-01T01:06:29.732846Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.140:2379"}
	{"level":"info","ts":"2023-11-01T01:06:29.733579Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-01T01:06:29.733594Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-01T01:06:29.736022Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"77a8f052fa5fccd4","local-member-id":"85ea5ca067fb3fe3","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T01:06:29.736265Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T01:06:29.740598Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  01:15:55 up 14 min,  0 users,  load average: 0.50, 0.28, 0.20
	Linux no-preload-008483 5.10.57 #1 SMP Tue Oct 31 22:14:31 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [4de02e7339a911abc7905e3d5b90216f4a37571e0ffcb1411f51374a244ef3fe] <==
	* W1101 01:11:32.602589       1 handler_proxy.go:93] no RequestInfo found in the context
	E1101 01:11:32.602654       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1101 01:11:32.602665       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1101 01:11:32.602758       1 handler_proxy.go:93] no RequestInfo found in the context
	E1101 01:11:32.602866       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1101 01:11:32.603825       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1101 01:12:31.439462       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1101 01:12:32.603614       1 handler_proxy.go:93] no RequestInfo found in the context
	E1101 01:12:32.603681       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1101 01:12:32.603690       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1101 01:12:32.604985       1 handler_proxy.go:93] no RequestInfo found in the context
	E1101 01:12:32.605111       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1101 01:12:32.605124       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1101 01:13:31.439629       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1101 01:14:31.439657       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1101 01:14:32.604538       1 handler_proxy.go:93] no RequestInfo found in the context
	E1101 01:14:32.604602       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1101 01:14:32.604615       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1101 01:14:32.605842       1 handler_proxy.go:93] no RequestInfo found in the context
	E1101 01:14:32.606030       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1101 01:14:32.606073       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1101 01:15:31.439579       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [b382513a898be97c48e1ae6d9ba0083e519d059d2e5161e8d91c119e828b9535] <==
	* I1101 01:10:17.842508       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:10:47.385970       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:10:47.851639       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:11:17.392044       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:11:17.864011       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:11:47.398674       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:11:47.872656       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:12:17.406695       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:12:17.882531       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1101 01:12:37.380292       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="366.367µs"
	E1101 01:12:47.414580       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:12:47.891797       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1101 01:12:50.379199       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="129.081µs"
	E1101 01:13:17.421812       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:13:17.900568       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:13:47.431512       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:13:47.910729       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:14:17.438033       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:14:17.924857       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:14:47.443779       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:14:47.936019       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:15:17.450044       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:15:17.945767       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:15:47.456807       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:15:47.956752       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [edd94ede27bbbe8cfe0647252d6bed169e64b894a76d5a29893d784dc05f519b] <==
	* I1101 01:06:51.425719       1 server_others.go:69] "Using iptables proxy"
	I1101 01:06:51.436786       1 node.go:141] Successfully retrieved node IP: 192.168.50.140
	I1101 01:06:51.479431       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1101 01:06:51.479537       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 01:06:51.483017       1 server_others.go:152] "Using iptables Proxier"
	I1101 01:06:51.483101       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 01:06:51.483400       1 server.go:846] "Version info" version="v1.28.3"
	I1101 01:06:51.483441       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 01:06:51.484746       1 config.go:188] "Starting service config controller"
	I1101 01:06:51.484827       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 01:06:51.484855       1 config.go:97] "Starting endpoint slice config controller"
	I1101 01:06:51.484859       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 01:06:51.487064       1 config.go:315] "Starting node config controller"
	I1101 01:06:51.487206       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 01:06:51.586033       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1101 01:06:51.590145       1 shared_informer.go:318] Caches are synced for service config
	I1101 01:06:51.590176       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [c1ba3a1083dd2dcb1278523bde1f5387fb968eeba4196562c8bf480c69743a4a] <==
	* W1101 01:06:31.662677       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1101 01:06:31.662737       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 01:06:31.662907       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1101 01:06:31.664014       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1101 01:06:31.664383       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1101 01:06:31.664446       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1101 01:06:31.664515       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1101 01:06:31.664594       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1101 01:06:31.664395       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1101 01:06:31.664665       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1101 01:06:32.488136       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1101 01:06:32.488233       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 01:06:32.505675       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1101 01:06:32.505728       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1101 01:06:32.532772       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1101 01:06:32.532797       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1101 01:06:32.710884       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1101 01:06:32.710992       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1101 01:06:32.728153       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1101 01:06:32.728398       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1101 01:06:32.755735       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1101 01:06:32.755783       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1101 01:06:32.881325       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1101 01:06:32.881391       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1101 01:06:34.952471       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-11-01 01:01:08 UTC, ends at Wed 2023-11-01 01:15:56 UTC. --
	Nov 01 01:13:03 no-preload-008483 kubelet[4204]: E1101 01:13:03.362699    4204 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qcxt7" podUID="bf444b92-dd54-43fc-a9a8-0e9000b562e3"
	Nov 01 01:13:18 no-preload-008483 kubelet[4204]: E1101 01:13:18.363068    4204 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qcxt7" podUID="bf444b92-dd54-43fc-a9a8-0e9000b562e3"
	Nov 01 01:13:33 no-preload-008483 kubelet[4204]: E1101 01:13:33.362608    4204 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qcxt7" podUID="bf444b92-dd54-43fc-a9a8-0e9000b562e3"
	Nov 01 01:13:35 no-preload-008483 kubelet[4204]: E1101 01:13:35.503440    4204 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 01 01:13:35 no-preload-008483 kubelet[4204]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 01 01:13:35 no-preload-008483 kubelet[4204]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 01 01:13:35 no-preload-008483 kubelet[4204]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 01 01:13:44 no-preload-008483 kubelet[4204]: E1101 01:13:44.363260    4204 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qcxt7" podUID="bf444b92-dd54-43fc-a9a8-0e9000b562e3"
	Nov 01 01:13:56 no-preload-008483 kubelet[4204]: E1101 01:13:56.363213    4204 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qcxt7" podUID="bf444b92-dd54-43fc-a9a8-0e9000b562e3"
	Nov 01 01:14:07 no-preload-008483 kubelet[4204]: E1101 01:14:07.361980    4204 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qcxt7" podUID="bf444b92-dd54-43fc-a9a8-0e9000b562e3"
	Nov 01 01:14:19 no-preload-008483 kubelet[4204]: E1101 01:14:19.362763    4204 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qcxt7" podUID="bf444b92-dd54-43fc-a9a8-0e9000b562e3"
	Nov 01 01:14:30 no-preload-008483 kubelet[4204]: E1101 01:14:30.362210    4204 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qcxt7" podUID="bf444b92-dd54-43fc-a9a8-0e9000b562e3"
	Nov 01 01:14:35 no-preload-008483 kubelet[4204]: E1101 01:14:35.504588    4204 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 01 01:14:35 no-preload-008483 kubelet[4204]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 01 01:14:35 no-preload-008483 kubelet[4204]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 01 01:14:35 no-preload-008483 kubelet[4204]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 01 01:14:44 no-preload-008483 kubelet[4204]: E1101 01:14:44.362495    4204 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qcxt7" podUID="bf444b92-dd54-43fc-a9a8-0e9000b562e3"
	Nov 01 01:14:59 no-preload-008483 kubelet[4204]: E1101 01:14:59.363596    4204 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qcxt7" podUID="bf444b92-dd54-43fc-a9a8-0e9000b562e3"
	Nov 01 01:15:14 no-preload-008483 kubelet[4204]: E1101 01:15:14.363274    4204 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qcxt7" podUID="bf444b92-dd54-43fc-a9a8-0e9000b562e3"
	Nov 01 01:15:28 no-preload-008483 kubelet[4204]: E1101 01:15:28.363275    4204 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qcxt7" podUID="bf444b92-dd54-43fc-a9a8-0e9000b562e3"
	Nov 01 01:15:35 no-preload-008483 kubelet[4204]: E1101 01:15:35.502539    4204 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 01 01:15:35 no-preload-008483 kubelet[4204]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 01 01:15:35 no-preload-008483 kubelet[4204]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 01 01:15:35 no-preload-008483 kubelet[4204]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 01 01:15:41 no-preload-008483 kubelet[4204]: E1101 01:15:41.365083    4204 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qcxt7" podUID="bf444b92-dd54-43fc-a9a8-0e9000b562e3"
	
	* 
	* ==> storage-provisioner [e91e26d41e22a636f3966fdb1dd6db999eae4ea6cff3e1290036854c8960f051] <==
	* I1101 01:06:51.323773       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 01:06:51.351818       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 01:06:51.351989       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 01:06:51.367052       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 01:06:51.369588       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-008483_71f990fe-abf4-4bd5-b75e-38511119a99b!
	I1101 01:06:51.370831       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cae153b0-0fc0-420c-8f0e-867709ef7140", APIVersion:"v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-008483_71f990fe-abf4-4bd5-b75e-38511119a99b became leader
	I1101 01:06:51.471073       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-008483_71f990fe-abf4-4bd5-b75e-38511119a99b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-008483 -n no-preload-008483
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-008483 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-qcxt7
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-008483 describe pod metrics-server-57f55c9bc5-qcxt7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-008483 describe pod metrics-server-57f55c9bc5-qcxt7: exit status 1 (72.028141ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-qcxt7" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-008483 describe pod metrics-server-57f55c9bc5-qcxt7: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.16s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1101 01:07:37.168206   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/auto-090856/client.crt: no such file or directory
E1101 01:07:43.438020   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/kindnet-090856/client.crt: no such file or directory
E1101 01:08:02.504548   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
E1101 01:08:32.447720   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/calico-090856/client.crt: no such file or directory
E1101 01:08:39.053732   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
E1101 01:09:06.482824   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/kindnet-090856/client.crt: no such file or directory
E1101 01:09:51.059805   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/custom-flannel-090856/client.crt: no such file or directory
E1101 01:09:55.492234   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/calico-090856/client.crt: no such file or directory
E1101 01:10:30.317456   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/enable-default-cni-090856/client.crt: no such file or directory
E1101 01:10:35.091910   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
E1101 01:10:47.920836   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/flannel-090856/client.crt: no such file or directory
E1101 01:10:52.798446   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/bridge-090856/client.crt: no such file or directory
E1101 01:11:14.103175   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/custom-flannel-090856/client.crt: no such file or directory
E1101 01:11:14.122427   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/auto-090856/client.crt: no such file or directory
E1101 01:11:53.362807   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/enable-default-cni-090856/client.crt: no such file or directory
E1101 01:11:58.140301   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
E1101 01:12:10.964920   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/flannel-090856/client.crt: no such file or directory
E1101 01:12:15.843188   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/bridge-090856/client.crt: no such file or directory
E1101 01:12:16.006519   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
E1101 01:12:43.437699   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/kindnet-090856/client.crt: no such file or directory
E1101 01:13:02.504935   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
E1101 01:13:32.448471   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/calico-090856/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-330042 -n old-k8s-version-330042
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-11-01 01:16:37.291704215 +0000 UTC m=+5570.856286209
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-330042 -n old-k8s-version-330042
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-330042 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-330042 logs -n 25: (1.551184373s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p flannel-090856 sudo                                 | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | containerd config dump                                 |                              |         |                |                     |                     |
	| ssh     | -p flannel-090856 sudo                                 | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | systemctl status crio --all                            |                              |         |                |                     |                     |
	|         | --full --no-pager                                      |                              |         |                |                     |                     |
	| ssh     | -p flannel-090856 sudo                                 | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |                |                     |                     |
	| start   | -p embed-certs-754132                                  | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:52 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| ssh     | -p flannel-090856 sudo find                            | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |                |                     |                     |
	| ssh     | -p flannel-090856 sudo crio                            | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | config                                                 |                              |         |                |                     |                     |
	| delete  | -p flannel-090856                                      | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	| delete  | -p                                                     | disable-driver-mounts-130996 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | disable-driver-mounts-130996                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:53 UTC |
	|         | default-k8s-diff-port-639310                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-008483             | no-preload-008483            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC | 01 Nov 23 00:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-008483                                   | no-preload-008483            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-754132            | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC | 01 Nov 23 00:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-754132                                  | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-330042        | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC | 01 Nov 23 00:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-330042                              | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-639310  | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:53 UTC | 01 Nov 23 00:53 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:53 UTC |                     |
	|         | default-k8s-diff-port-639310                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-008483                  | no-preload-008483            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-754132                 | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-008483                                   | no-preload-008483            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC | 01 Nov 23 01:06 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| start   | -p embed-certs-754132                                  | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC | 01 Nov 23 01:05 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-330042             | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-330042                              | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC | 01 Nov 23 01:07 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-639310       | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:56 UTC | 01 Nov 23 01:06 UTC |
	|         | default-k8s-diff-port-639310                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/01 00:56:25
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 00:56:25.029853   59148 out.go:296] Setting OutFile to fd 1 ...
	I1101 00:56:25.030119   59148 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:56:25.030128   59148 out.go:309] Setting ErrFile to fd 2...
	I1101 00:56:25.030133   59148 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:56:25.030311   59148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7305/.minikube/bin
	I1101 00:56:25.030856   59148 out.go:303] Setting JSON to false
	I1101 00:56:25.031741   59148 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5930,"bootTime":1698794255,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 00:56:25.031805   59148 start.go:138] virtualization: kvm guest
	I1101 00:56:25.034341   59148 out.go:177] * [default-k8s-diff-port-639310] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1101 00:56:25.036261   59148 out.go:177]   - MINIKUBE_LOCATION=17486
	I1101 00:56:25.037829   59148 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 00:56:25.036294   59148 notify.go:220] Checking for updates...
	I1101 00:56:25.041068   59148 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 00:56:25.042691   59148 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7305/.minikube
	I1101 00:56:25.044204   59148 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 00:56:25.045719   59148 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 00:56:25.047781   59148 config.go:182] Loaded profile config "default-k8s-diff-port-639310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:56:25.048183   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:56:25.048245   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:56:25.062714   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34345
	I1101 00:56:25.063108   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:56:25.063662   59148 main.go:141] libmachine: Using API Version  1
	I1101 00:56:25.063682   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:56:25.064083   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:56:25.064302   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 00:56:25.064571   59148 driver.go:378] Setting default libvirt URI to qemu:///system
	I1101 00:56:25.064917   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:56:25.064958   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:56:25.079214   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46451
	I1101 00:56:25.079576   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:56:25.080090   59148 main.go:141] libmachine: Using API Version  1
	I1101 00:56:25.080115   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:56:25.080419   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:56:25.080616   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 00:56:25.119015   59148 out.go:177] * Using the kvm2 driver based on existing profile
	I1101 00:56:25.120650   59148 start.go:298] selected driver: kvm2
	I1101 00:56:25.120670   59148 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-639310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-639310 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.97 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:56:25.120819   59148 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 00:56:25.121515   59148 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:56:25.121580   59148 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17486-7305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1101 00:56:25.137482   59148 install.go:137] /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1101 00:56:25.137885   59148 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 00:56:25.137962   59148 cni.go:84] Creating CNI manager for ""
	I1101 00:56:25.137976   59148 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 00:56:25.137988   59148 start_flags.go:323] config:
	{Name:default-k8s-diff-port-639310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-639310 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.97 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:56:25.138186   59148 iso.go:125] acquiring lock: {Name:mk1f649ca0b7c1ae293cd66cb85f9eeda028b20b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:56:25.140405   59148 out.go:177] * Starting control plane node default-k8s-diff-port-639310 in cluster default-k8s-diff-port-639310
	I1101 00:56:25.141855   59148 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 00:56:25.141918   59148 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1101 00:56:25.141935   59148 cache.go:56] Caching tarball of preloaded images
	I1101 00:56:25.142048   59148 preload.go:174] Found /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 00:56:25.142066   59148 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1101 00:56:25.142204   59148 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/config.json ...
	I1101 00:56:25.142449   59148 start.go:365] acquiring machines lock for default-k8s-diff-port-639310: {Name:mk7aad88408c319111b9be8e59d9593a9e88374b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 00:56:26.060176   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:29.132322   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:35.212221   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:38.284225   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:44.364219   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:47.436224   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:53.516201   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:56.588256   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:02.668213   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:05.740252   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:11.820242   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:14.892259   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:20.972213   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:24.044181   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:30.124291   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:33.196239   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:39.276183   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:42.348235   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:48.428230   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:51.500275   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:57.580250   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:00.652208   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:06.732207   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:09.804251   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:15.884265   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:18.956206   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:25.040217   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:28.108288   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:34.188238   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:37.260268   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:43.340210   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:46.412248   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:52.492221   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:55.564188   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:01.644193   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:04.716194   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:10.796265   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:13.868226   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:19.948219   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:23.020283   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:29.100251   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:32.172268   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:38.252219   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:41.324223   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:47.404323   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:50.476273   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:53.480339   58730 start.go:369] acquired machines lock for "embed-certs-754132" in 4m35.118425724s
	I1101 00:59:53.480387   58730 start.go:96] Skipping create...Using existing machine configuration
	I1101 00:59:53.480393   58730 fix.go:54] fixHost starting: 
	I1101 00:59:53.480707   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:59:53.480737   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:59:53.495582   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34891
	I1101 00:59:53.495998   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:59:53.496445   58730 main.go:141] libmachine: Using API Version  1
	I1101 00:59:53.496466   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:59:53.496844   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:59:53.497017   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 00:59:53.497171   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetState
	I1101 00:59:53.498937   58730 fix.go:102] recreateIfNeeded on embed-certs-754132: state=Stopped err=<nil>
	I1101 00:59:53.498956   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	W1101 00:59:53.499128   58730 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 00:59:53.500909   58730 out.go:177] * Restarting existing kvm2 VM for "embed-certs-754132" ...
	I1101 00:59:53.478140   58676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 00:59:53.478177   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 00:59:53.480187   58676 machine.go:91] provisioned docker machine in 4m37.408348367s
	I1101 00:59:53.480232   58676 fix.go:56] fixHost completed within 4m37.430154401s
	I1101 00:59:53.480241   58676 start.go:83] releasing machines lock for "no-preload-008483", held for 4m37.430178737s
	W1101 00:59:53.480270   58676 start.go:691] error starting host: provision: host is not running
	W1101 00:59:53.480361   58676 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1101 00:59:53.480371   58676 start.go:706] Will try again in 5 seconds ...
	I1101 00:59:53.502467   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Start
	I1101 00:59:53.502656   58730 main.go:141] libmachine: (embed-certs-754132) Ensuring networks are active...
	I1101 00:59:53.503633   58730 main.go:141] libmachine: (embed-certs-754132) Ensuring network default is active
	I1101 00:59:53.504036   58730 main.go:141] libmachine: (embed-certs-754132) Ensuring network mk-embed-certs-754132 is active
	I1101 00:59:53.504557   58730 main.go:141] libmachine: (embed-certs-754132) Getting domain xml...
	I1101 00:59:53.505302   58730 main.go:141] libmachine: (embed-certs-754132) Creating domain...
	I1101 00:59:54.749625   58730 main.go:141] libmachine: (embed-certs-754132) Waiting to get IP...
	I1101 00:59:54.750551   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:54.750924   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:54.751002   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:54.750917   59675 retry.go:31] will retry after 295.652358ms: waiting for machine to come up
	I1101 00:59:55.048450   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:55.048884   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:55.048910   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:55.048845   59675 retry.go:31] will retry after 335.376353ms: waiting for machine to come up
	I1101 00:59:55.385612   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:55.385959   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:55.386000   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:55.385952   59675 retry.go:31] will retry after 353.381783ms: waiting for machine to come up
	I1101 00:59:55.740456   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:55.740943   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:55.740979   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:55.740874   59675 retry.go:31] will retry after 417.863733ms: waiting for machine to come up
	I1101 00:59:56.160773   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:56.161271   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:56.161298   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:56.161236   59675 retry.go:31] will retry after 659.454883ms: waiting for machine to come up
	I1101 00:59:56.822139   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:56.822551   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:56.822573   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:56.822511   59675 retry.go:31] will retry after 627.06089ms: waiting for machine to come up
	I1101 00:59:57.451254   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:57.451659   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:57.451687   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:57.451624   59675 retry.go:31] will retry after 1.095096876s: waiting for machine to come up
	I1101 00:59:58.481145   58676 start.go:365] acquiring machines lock for no-preload-008483: {Name:mk7aad88408c319111b9be8e59d9593a9e88374b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 00:59:58.548870   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:58.549359   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:58.549410   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:58.549323   59675 retry.go:31] will retry after 1.133377858s: waiting for machine to come up
	I1101 00:59:59.684741   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:59.685182   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:59.685205   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:59.685149   59675 retry.go:31] will retry after 1.332824718s: waiting for machine to come up
	I1101 01:00:01.019662   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:01.020166   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 01:00:01.020217   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 01:00:01.020119   59675 retry.go:31] will retry after 1.62664347s: waiting for machine to come up
	I1101 01:00:02.649017   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:02.649459   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 01:00:02.649490   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 01:00:02.649404   59675 retry.go:31] will retry after 2.043788133s: waiting for machine to come up
	I1101 01:00:04.695225   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:04.695657   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 01:00:04.695711   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 01:00:04.695640   59675 retry.go:31] will retry after 2.435347975s: waiting for machine to come up
	I1101 01:00:07.133078   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:07.133531   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 01:00:07.133567   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 01:00:07.133492   59675 retry.go:31] will retry after 2.768108097s: waiting for machine to come up
	I1101 01:00:09.903094   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:09.903460   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 01:00:09.903484   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 01:00:09.903424   59675 retry.go:31] will retry after 3.955575113s: waiting for machine to come up
	I1101 01:00:15.240546   58823 start.go:369] acquired machines lock for "old-k8s-version-330042" in 4m47.663537715s
	I1101 01:00:15.240608   58823 start.go:96] Skipping create...Using existing machine configuration
	I1101 01:00:15.240616   58823 fix.go:54] fixHost starting: 
	I1101 01:00:15.241087   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:00:15.241135   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:00:15.260921   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45157
	I1101 01:00:15.261342   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:00:15.261921   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:00:15.261954   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:00:15.262285   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:00:15.262488   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:15.262657   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetState
	I1101 01:00:15.264332   58823 fix.go:102] recreateIfNeeded on old-k8s-version-330042: state=Stopped err=<nil>
	I1101 01:00:15.264357   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	W1101 01:00:15.264541   58823 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 01:00:15.266960   58823 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-330042" ...
	I1101 01:00:13.860184   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.860818   58730 main.go:141] libmachine: (embed-certs-754132) Found IP for machine: 192.168.61.83
	I1101 01:00:13.860849   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has current primary IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.860866   58730 main.go:141] libmachine: (embed-certs-754132) Reserving static IP address...
	I1101 01:00:13.861321   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "embed-certs-754132", mac: "52:54:00:5e:2f:dd", ip: "192.168.61.83"} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:13.861350   58730 main.go:141] libmachine: (embed-certs-754132) Reserved static IP address: 192.168.61.83
	I1101 01:00:13.861362   58730 main.go:141] libmachine: (embed-certs-754132) DBG | skip adding static IP to network mk-embed-certs-754132 - found existing host DHCP lease matching {name: "embed-certs-754132", mac: "52:54:00:5e:2f:dd", ip: "192.168.61.83"}
	I1101 01:00:13.861372   58730 main.go:141] libmachine: (embed-certs-754132) Waiting for SSH to be available...
	I1101 01:00:13.861384   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Getting to WaitForSSH function...
	I1101 01:00:13.864760   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.865204   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:13.865232   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.865368   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Using SSH client type: external
	I1101 01:00:13.865408   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa (-rw-------)
	I1101 01:00:13.865434   58730 main.go:141] libmachine: (embed-certs-754132) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.83 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 01:00:13.865446   58730 main.go:141] libmachine: (embed-certs-754132) DBG | About to run SSH command:
	I1101 01:00:13.865454   58730 main.go:141] libmachine: (embed-certs-754132) DBG | exit 0
	I1101 01:00:13.964103   58730 main.go:141] libmachine: (embed-certs-754132) DBG | SSH cmd err, output: <nil>: 
	I1101 01:00:13.964444   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetConfigRaw
	I1101 01:00:13.965066   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetIP
	I1101 01:00:13.967463   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.967768   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:13.967791   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.968100   58730 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/config.json ...
	I1101 01:00:13.968294   58730 machine.go:88] provisioning docker machine ...
	I1101 01:00:13.968312   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:00:13.968530   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetMachineName
	I1101 01:00:13.968707   58730 buildroot.go:166] provisioning hostname "embed-certs-754132"
	I1101 01:00:13.968728   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetMachineName
	I1101 01:00:13.968901   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:13.971288   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.971637   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:13.971676   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.971792   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:13.972000   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:13.972181   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:13.972312   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:13.972476   58730 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:13.972798   58730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I1101 01:00:13.972812   58730 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-754132 && echo "embed-certs-754132" | sudo tee /etc/hostname
	I1101 01:00:14.121000   58730 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-754132
	
	I1101 01:00:14.121036   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:14.124379   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.124813   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:14.124840   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.125085   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:14.125339   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:14.125667   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:14.125832   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:14.126091   58730 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:14.126401   58730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I1101 01:00:14.126418   58730 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-754132' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-754132/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-754132' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 01:00:14.268155   58730 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 01:00:14.268188   58730 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1101 01:00:14.268210   58730 buildroot.go:174] setting up certificates
	I1101 01:00:14.268238   58730 provision.go:83] configureAuth start
	I1101 01:00:14.268255   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetMachineName
	I1101 01:00:14.268542   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetIP
	I1101 01:00:14.271516   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.271946   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:14.271984   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.272150   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:14.274610   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.275017   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:14.275054   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.275206   58730 provision.go:138] copyHostCerts
	I1101 01:00:14.275269   58730 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem, removing ...
	I1101 01:00:14.275282   58730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 01:00:14.275351   58730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1101 01:00:14.275442   58730 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem, removing ...
	I1101 01:00:14.275450   58730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 01:00:14.275475   58730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1101 01:00:14.275526   58730 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem, removing ...
	I1101 01:00:14.275533   58730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 01:00:14.275571   58730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1101 01:00:14.275616   58730 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.embed-certs-754132 san=[192.168.61.83 192.168.61.83 localhost 127.0.0.1 minikube embed-certs-754132]
	I1101 01:00:14.494175   58730 provision.go:172] copyRemoteCerts
	I1101 01:00:14.494239   58730 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 01:00:14.494265   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:14.496921   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.497263   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:14.497310   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.497482   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:14.497748   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:14.497906   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:14.498052   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:00:14.592739   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 01:00:14.614862   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1101 01:00:14.636483   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1101 01:00:14.658154   58730 provision.go:86] duration metric: configureAuth took 389.900669ms
	I1101 01:00:14.658179   58730 buildroot.go:189] setting minikube options for container-runtime
	I1101 01:00:14.658364   58730 config.go:182] Loaded profile config "embed-certs-754132": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:00:14.658478   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:14.661110   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.661450   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:14.661500   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.661667   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:14.661853   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:14.661997   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:14.662120   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:14.662279   58730 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:14.662573   58730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I1101 01:00:14.662589   58730 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 01:00:14.974481   58730 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 01:00:14.974505   58730 machine.go:91] provisioned docker machine in 1.006198078s
	I1101 01:00:14.974521   58730 start.go:300] post-start starting for "embed-certs-754132" (driver="kvm2")
	I1101 01:00:14.974534   58730 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 01:00:14.974556   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:00:14.974913   58730 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 01:00:14.974946   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:14.977485   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.977815   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:14.977846   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.977970   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:14.978146   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:14.978310   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:14.978470   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:00:15.073889   58730 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 01:00:15.077710   58730 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 01:00:15.077734   58730 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1101 01:00:15.077791   58730 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1101 01:00:15.077855   58730 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> 145042.pem in /etc/ssl/certs
	I1101 01:00:15.077961   58730 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 01:00:15.086567   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:00:15.107446   58730 start.go:303] post-start completed in 132.911351ms
	I1101 01:00:15.107468   58730 fix.go:56] fixHost completed within 21.627074953s
	I1101 01:00:15.107485   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:15.110070   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.110392   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:15.110426   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.110552   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:15.110748   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:15.110914   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:15.111078   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:15.111268   58730 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:15.111683   58730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I1101 01:00:15.111696   58730 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1101 01:00:15.240326   58730 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698800415.188118531
	
	I1101 01:00:15.240357   58730 fix.go:206] guest clock: 1698800415.188118531
	I1101 01:00:15.240365   58730 fix.go:219] Guest: 2023-11-01 01:00:15.188118531 +0000 UTC Remote: 2023-11-01 01:00:15.107470988 +0000 UTC m=+296.909935143 (delta=80.647543ms)
	I1101 01:00:15.240385   58730 fix.go:190] guest clock delta is within tolerance: 80.647543ms
	I1101 01:00:15.240420   58730 start.go:83] releasing machines lock for "embed-certs-754132", held for 21.760022516s
	I1101 01:00:15.240464   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:00:15.240736   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetIP
	I1101 01:00:15.243570   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.243905   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:15.243961   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.244163   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:00:15.244698   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:00:15.244872   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:00:15.244948   58730 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 01:00:15.245012   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:15.245063   58730 ssh_runner.go:195] Run: cat /version.json
	I1101 01:00:15.245089   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:15.247618   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.247886   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.247985   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:15.248018   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.248265   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:15.248358   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:15.248387   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.248422   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:15.248600   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:15.248601   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:15.248774   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:15.248765   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:00:15.248913   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:15.249034   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:00:15.383514   58730 ssh_runner.go:195] Run: systemctl --version
	I1101 01:00:15.389291   58730 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 01:00:15.531982   58730 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 01:00:15.537622   58730 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 01:00:15.537711   58730 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 01:00:15.554440   58730 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 01:00:15.554488   58730 start.go:472] detecting cgroup driver to use...
	I1101 01:00:15.554549   58730 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 01:00:15.569732   58730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 01:00:15.582752   58730 docker.go:204] disabling cri-docker service (if available) ...
	I1101 01:00:15.582795   58730 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 01:00:15.596221   58730 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 01:00:15.609815   58730 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 01:00:15.717679   58730 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 01:00:15.842128   58730 docker.go:220] disabling docker service ...
	I1101 01:00:15.842203   58730 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 01:00:15.854613   58730 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 01:00:15.869487   58730 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 01:00:15.991107   58730 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 01:00:16.118392   58730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 01:00:16.131570   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 01:00:16.150691   58730 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 01:00:16.150755   58730 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:16.160081   58730 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 01:00:16.160171   58730 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:16.170277   58730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:16.180469   58730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:16.189966   58730 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 01:00:16.199465   58730 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 01:00:16.207995   58730 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 01:00:16.208057   58730 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 01:00:16.221491   58730 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 01:00:16.231855   58730 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 01:00:16.355227   58730 ssh_runner.go:195] Run: sudo systemctl restart crio
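	For reference, a minimal sketch of what the sed edits above leave in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted (reconstructed from the commands shown rather than captured from the VM; the section headers are assumed from CRI-O's stock drop-in layout):

		[crio.image]
		pause_image = "registry.k8s.io/pause:3.9"

		[crio.runtime]
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"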
	I1101 01:00:16.520341   58730 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 01:00:16.520403   58730 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 01:00:16.525071   58730 start.go:540] Will wait 60s for crictl version
	I1101 01:00:16.525143   58730 ssh_runner.go:195] Run: which crictl
	I1101 01:00:16.529138   58730 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 01:00:16.566007   58730 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1101 01:00:16.566082   58730 ssh_runner.go:195] Run: crio --version
	I1101 01:00:16.612652   58730 ssh_runner.go:195] Run: crio --version
	I1101 01:00:16.665668   58730 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1101 01:00:15.268389   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Start
	I1101 01:00:15.268575   58823 main.go:141] libmachine: (old-k8s-version-330042) Ensuring networks are active...
	I1101 01:00:15.269280   58823 main.go:141] libmachine: (old-k8s-version-330042) Ensuring network default is active
	I1101 01:00:15.269618   58823 main.go:141] libmachine: (old-k8s-version-330042) Ensuring network mk-old-k8s-version-330042 is active
	I1101 01:00:15.270056   58823 main.go:141] libmachine: (old-k8s-version-330042) Getting domain xml...
	I1101 01:00:15.270814   58823 main.go:141] libmachine: (old-k8s-version-330042) Creating domain...
	I1101 01:00:16.566526   58823 main.go:141] libmachine: (old-k8s-version-330042) Waiting to get IP...
	I1101 01:00:16.567713   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:16.568239   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:16.568336   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:16.568220   59797 retry.go:31] will retry after 200.046919ms: waiting for machine to come up
	I1101 01:00:16.769849   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:16.770436   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:16.770477   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:16.770427   59797 retry.go:31] will retry after 301.397937ms: waiting for machine to come up
	I1101 01:00:17.074180   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:17.074657   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:17.074689   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:17.074626   59797 retry.go:31] will retry after 462.511505ms: waiting for machine to come up
	I1101 01:00:16.667657   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetIP
	I1101 01:00:16.670756   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:16.671148   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:16.671216   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:16.671377   58730 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1101 01:00:16.675342   58730 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:00:16.687224   58730 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 01:00:16.687310   58730 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:00:16.726714   58730 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1101 01:00:16.726779   58730 ssh_runner.go:195] Run: which lz4
	I1101 01:00:16.730745   58730 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 01:00:16.734588   58730 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 01:00:16.734623   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1101 01:00:17.538840   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:17.539313   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:17.539337   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:17.539276   59797 retry.go:31] will retry after 562.894181ms: waiting for machine to come up
	I1101 01:00:18.104173   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:18.104678   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:18.104712   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:18.104641   59797 retry.go:31] will retry after 659.582768ms: waiting for machine to come up
	I1101 01:00:18.766319   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:18.766719   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:18.766749   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:18.766688   59797 retry.go:31] will retry after 626.783168ms: waiting for machine to come up
	I1101 01:00:19.395203   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:19.395693   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:19.395720   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:19.395651   59797 retry.go:31] will retry after 884.294618ms: waiting for machine to come up
	I1101 01:00:20.281677   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:20.282152   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:20.282176   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:20.282094   59797 retry.go:31] will retry after 997.794459ms: waiting for machine to come up
	I1101 01:00:21.281118   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:21.281568   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:21.281596   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:21.281525   59797 retry.go:31] will retry after 1.624252325s: waiting for machine to come up
	I1101 01:00:18.514400   58730 crio.go:444] Took 1.783693 seconds to copy over tarball
	I1101 01:00:18.514460   58730 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 01:00:21.481089   58730 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.966600648s)
	I1101 01:00:21.481118   58730 crio.go:451] Took 2.966695 seconds to extract the tarball
	I1101 01:00:21.481130   58730 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 01:00:21.520934   58730 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:00:21.568541   58730 crio.go:496] all images are preloaded for cri-o runtime.
	I1101 01:00:21.568569   58730 cache_images.go:84] Images are preloaded, skipping loading
	I1101 01:00:21.568638   58730 ssh_runner.go:195] Run: crio config
	I1101 01:00:21.626687   58730 cni.go:84] Creating CNI manager for ""
	I1101 01:00:21.626707   58730 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:00:21.626724   58730 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 01:00:21.626745   58730 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.83 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-754132 NodeName:embed-certs-754132 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.83"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.83 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 01:00:21.626906   58730 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.83
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-754132"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.83
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.83"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 01:00:21.627000   58730 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-754132 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.83
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:embed-certs-754132 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1101 01:00:21.627062   58730 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 01:00:21.635965   58730 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 01:00:21.636048   58730 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 01:00:21.644318   58730 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1101 01:00:21.659722   58730 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 01:00:21.674541   58730 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
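	The kubeadm config and kubelet unit rendered above are copied onto the VM as /var/tmp/minikube/kubeadm.yaml.new and /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A config of this shape is the kind of file kubeadm consumes through its --config flag; an illustrative invocation only (the exact command minikube runs for this restart is not shown in this excerpt):

		sudo /var/lib/minikube/binaries/v1.28.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new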
	I1101 01:00:21.690451   58730 ssh_runner.go:195] Run: grep 192.168.61.83	control-plane.minikube.internal$ /etc/hosts
	I1101 01:00:21.694013   58730 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.83	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:00:21.705929   58730 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132 for IP: 192.168.61.83
	I1101 01:00:21.705978   58730 certs.go:190] acquiring lock for shared ca certs: {Name:mk6185b3938c4b51e80f57b7c81926c81b632b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:00:21.706152   58730 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key
	I1101 01:00:21.706193   58730 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key
	I1101 01:00:21.706255   58730 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/client.key
	I1101 01:00:21.706321   58730 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/apiserver.key.00ce3257
	I1101 01:00:21.706365   58730 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/proxy-client.key
	I1101 01:00:21.706507   58730 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem (1338 bytes)
	W1101 01:00:21.706541   58730 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504_empty.pem, impossibly tiny 0 bytes
	I1101 01:00:21.706552   58730 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 01:00:21.706580   58730 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem (1078 bytes)
	I1101 01:00:21.706606   58730 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem (1123 bytes)
	I1101 01:00:21.706633   58730 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem (1675 bytes)
	I1101 01:00:21.706670   58730 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:00:21.707263   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 01:00:21.734199   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 01:00:21.760230   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 01:00:21.787083   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 01:00:21.810498   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 01:00:21.833905   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 01:00:21.859073   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 01:00:21.881222   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 01:00:21.904432   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem --> /usr/share/ca-certificates/14504.pem (1338 bytes)
	I1101 01:00:21.934873   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /usr/share/ca-certificates/145042.pem (1708 bytes)
	I1101 01:00:21.958353   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 01:00:21.981353   58730 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 01:00:21.997436   58730 ssh_runner.go:195] Run: openssl version
	I1101 01:00:22.003487   58730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14504.pem && ln -fs /usr/share/ca-certificates/14504.pem /etc/ssl/certs/14504.pem"
	I1101 01:00:22.013829   58730 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14504.pem
	I1101 01:00:22.018482   58730 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 01:00:22.018554   58730 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14504.pem
	I1101 01:00:22.024695   58730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14504.pem /etc/ssl/certs/51391683.0"
	I1101 01:00:22.034956   58730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145042.pem && ln -fs /usr/share/ca-certificates/145042.pem /etc/ssl/certs/145042.pem"
	I1101 01:00:22.046182   58730 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145042.pem
	I1101 01:00:22.051197   58730 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 01:00:22.051273   58730 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145042.pem
	I1101 01:00:22.057145   58730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145042.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 01:00:22.067337   58730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 01:00:22.077300   58730 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:22.081973   58730 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:22.082025   58730 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:22.087341   58730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
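Note on the sequence above (certs.go:480 plus the `openssl x509 -hash` / `ln -fs ... <hash>.0` commands): this is OpenSSL's subject-hash trust layout — each CA PEM under /usr/share/ca-certificates gets a symlink named <subject-hash>.0 in /etc/ssl/certs so the TLS stack can locate it. A minimal Go sketch of the same idea (not minikube's actual code; the PEM path is just the one visible in the log):
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	func main() {
		pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log; adjust as needed
		// openssl prints the short subject hash, e.g. "b5213941"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // mirror "ln -fs": drop any stale link first
		if err := os.Symlink(pemPath, link); err != nil {
			panic(err)
		}
		fmt.Println("linked", link, "->", pemPath)
	}
Run against minikubeCA.pem this would produce the b5213941.0 link seen in the log line above.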
	I1101 01:00:22.097021   58730 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 01:00:22.101801   58730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 01:00:22.107498   58730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 01:00:22.113187   58730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 01:00:22.119281   58730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 01:00:22.125109   58730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 01:00:22.130878   58730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
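The run of `openssl x509 -noout -in <crt> -checkend 86400` commands above asks whether each certificate will still be valid 86400 seconds (24 hours) from now; the answer is carried in the exit status. A small Go sketch of that check, assuming two of the paths from the log:
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		certs := []string{ // subset of the paths checked above
			"/var/lib/minikube/certs/apiserver-etcd-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
		}
		for _, c := range certs {
			// exit status 0: still valid in 24h; non-zero: expiring soon or unreadable
			err := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400").Run()
			if err != nil {
				fmt.Println(c, "expires within 24h or could not be read:", err)
				continue
			}
			fmt.Println(c, "valid for at least another 24h")
		}
	}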
	I1101 01:00:22.136711   58730 kubeadm.go:404] StartCluster: {Name:embed-certs-754132 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:embed-certs-754132 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.83 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 01:00:22.136843   58730 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 01:00:22.136898   58730 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:00:22.172188   58730 cri.go:89] found id: ""
	I1101 01:00:22.172267   58730 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 01:00:22.181863   58730 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1101 01:00:22.181901   58730 kubeadm.go:636] restartCluster start
	I1101 01:00:22.181962   58730 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 01:00:22.190970   58730 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:22.192108   58730 kubeconfig.go:92] found "embed-certs-754132" server: "https://192.168.61.83:8443"
	I1101 01:00:22.194633   58730 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 01:00:22.203708   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:22.203792   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:22.214867   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:22.214889   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:22.214972   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:22.225940   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:22.726677   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:22.726769   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:22.737874   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:23.226416   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:23.226492   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:23.237902   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:22.907053   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:22.907532   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:22.907563   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:22.907487   59797 retry.go:31] will retry after 2.170221456s: waiting for machine to come up
	I1101 01:00:25.079354   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:25.079791   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:25.079831   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:25.079754   59797 retry.go:31] will retry after 2.279141994s: waiting for machine to come up
	I1101 01:00:27.361955   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:27.362423   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:27.362456   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:27.362368   59797 retry.go:31] will retry after 2.772425742s: waiting for machine to come up
	I1101 01:00:23.726108   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:23.726179   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:23.737404   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:24.226007   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:24.226178   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:24.237401   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:24.727058   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:24.727152   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:24.742704   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:25.226166   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:25.226272   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:25.237808   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:25.726161   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:25.726244   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:25.737763   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:26.226321   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:26.226485   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:26.239919   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:26.726488   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:26.726596   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:26.740719   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:27.226157   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:27.226268   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:27.240719   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:27.726272   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:27.726360   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:27.738068   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:28.226882   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:28.226954   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:28.239208   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:30.136893   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:30.137311   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:30.137333   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:30.137274   59797 retry.go:31] will retry after 4.191062934s: waiting for machine to come up
	I1101 01:00:28.726726   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:28.726845   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:28.737955   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:29.226410   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:29.226475   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:29.237886   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:29.726367   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:29.726461   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:29.737767   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:30.226294   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:30.226389   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:30.237767   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:30.726295   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:30.726363   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:30.737691   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:31.226274   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:31.226343   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:31.237801   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:31.726297   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:31.726366   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:31.738060   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:32.204696   58730 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1101 01:00:32.204729   58730 kubeadm.go:1128] stopping kube-system containers ...
	I1101 01:00:32.204741   58730 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 01:00:32.204792   58730 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:00:32.241943   58730 cri.go:89] found id: ""
	I1101 01:00:32.242012   58730 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 01:00:32.256657   58730 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:00:32.265087   58730 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:00:32.265159   58730 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:00:32.273631   58730 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 01:00:32.273654   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:32.379073   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:35.634014   59148 start.go:369] acquired machines lock for "default-k8s-diff-port-639310" in 4m10.491521982s
	I1101 01:00:35.634070   59148 start.go:96] Skipping create...Using existing machine configuration
	I1101 01:00:35.634078   59148 fix.go:54] fixHost starting: 
	I1101 01:00:35.634533   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:00:35.634577   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:00:35.654259   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46439
	I1101 01:00:35.654746   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:00:35.655216   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:00:35.655245   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:00:35.655578   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:00:35.655759   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:35.655905   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetState
	I1101 01:00:35.657604   59148 fix.go:102] recreateIfNeeded on default-k8s-diff-port-639310: state=Stopped err=<nil>
	I1101 01:00:35.657646   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	W1101 01:00:35.657804   59148 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 01:00:35.660028   59148 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-639310" ...
	I1101 01:00:34.332963   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.333486   58823 main.go:141] libmachine: (old-k8s-version-330042) Found IP for machine: 192.168.39.90
	I1101 01:00:34.333518   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has current primary IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.333529   58823 main.go:141] libmachine: (old-k8s-version-330042) Reserving static IP address...
	I1101 01:00:34.333853   58823 main.go:141] libmachine: (old-k8s-version-330042) Reserved static IP address: 192.168.39.90
	I1101 01:00:34.333874   58823 main.go:141] libmachine: (old-k8s-version-330042) Waiting for SSH to be available...
	I1101 01:00:34.333901   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "old-k8s-version-330042", mac: "52:54:00:a2:40:80", ip: "192.168.39.90"} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.333932   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | skip adding static IP to network mk-old-k8s-version-330042 - found existing host DHCP lease matching {name: "old-k8s-version-330042", mac: "52:54:00:a2:40:80", ip: "192.168.39.90"}
	I1101 01:00:34.333954   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Getting to WaitForSSH function...
	I1101 01:00:34.335871   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.336238   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.336275   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.336409   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Using SSH client type: external
	I1101 01:00:34.336446   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa (-rw-------)
	I1101 01:00:34.336480   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.90 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 01:00:34.336501   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | About to run SSH command:
	I1101 01:00:34.336523   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | exit 0
	I1101 01:00:34.431938   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | SSH cmd err, output: <nil>: 
	I1101 01:00:34.432324   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetConfigRaw
	I1101 01:00:34.433070   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetIP
	I1101 01:00:34.435967   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.436402   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.436434   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.436696   58823 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/config.json ...
	I1101 01:00:34.436886   58823 machine.go:88] provisioning docker machine ...
	I1101 01:00:34.436903   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:34.437136   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetMachineName
	I1101 01:00:34.437299   58823 buildroot.go:166] provisioning hostname "old-k8s-version-330042"
	I1101 01:00:34.437323   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetMachineName
	I1101 01:00:34.437508   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:34.439785   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.440175   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.440215   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.440316   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:34.440481   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:34.440662   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:34.440800   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:34.440965   58823 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:34.441440   58823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1101 01:00:34.441461   58823 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-330042 && echo "old-k8s-version-330042" | sudo tee /etc/hostname
	I1101 01:00:34.590132   58823 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-330042
	
	I1101 01:00:34.590168   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:34.593018   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.593457   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.593521   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.593623   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:34.593817   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:34.594004   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:34.594151   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:34.594317   58823 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:34.594622   58823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1101 01:00:34.594640   58823 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-330042' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-330042/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-330042' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 01:00:34.743448   58823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 01:00:34.743485   58823 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1101 01:00:34.743510   58823 buildroot.go:174] setting up certificates
	I1101 01:00:34.743530   58823 provision.go:83] configureAuth start
	I1101 01:00:34.743545   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetMachineName
	I1101 01:00:34.743848   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetIP
	I1101 01:00:34.746932   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.747302   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.747333   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.747478   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:34.749794   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.750154   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.750185   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.750339   58823 provision.go:138] copyHostCerts
	I1101 01:00:34.750412   58823 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem, removing ...
	I1101 01:00:34.750435   58823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 01:00:34.750504   58823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1101 01:00:34.750620   58823 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem, removing ...
	I1101 01:00:34.750628   58823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 01:00:34.750655   58823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1101 01:00:34.750726   58823 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem, removing ...
	I1101 01:00:34.750736   58823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 01:00:34.750761   58823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1101 01:00:34.750820   58823 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-330042 san=[192.168.39.90 192.168.39.90 localhost 127.0.0.1 minikube old-k8s-version-330042]
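The provision.go:112 step above issues a server certificate signed by the minikube CA whose SANs (the san=[...] list) cover the VM's IP and host names. A rough, self-contained Go sketch of issuing such a certificate — it uses a throwaway in-memory CA and copies the SAN values from the log line, so it is an illustration of the technique, not the real ca.pem/ca-key.pem flow:
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func check(err error) {
		if err != nil {
			panic(err)
		}
	}
	
	func main() {
		// Throwaway CA; the real flow loads the persisted ca.pem / ca-key.pem instead.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(3, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		check(err)
		caCert, err := x509.ParseCertificate(caDER)
		check(err)
	
		// Server certificate whose SANs match the san=[...] list in the log line above.
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-330042"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("192.168.39.90"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "old-k8s-version-330042"},
		}
		der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		check(err)
		check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
	}
The resulting server.pem/server-key.pem pair is what the copyRemoteCerts step below pushes to /etc/docker on the guest.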
	I1101 01:00:34.819269   58823 provision.go:172] copyRemoteCerts
	I1101 01:00:34.819327   58823 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 01:00:34.819354   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:34.822409   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.822852   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.822887   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.823101   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:34.823335   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:34.823520   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:34.823688   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:00:34.928534   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 01:00:34.955140   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1101 01:00:34.982361   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 01:00:35.007980   58823 provision.go:86] duration metric: configureAuth took 264.432358ms
	I1101 01:00:35.008007   58823 buildroot.go:189] setting minikube options for container-runtime
	I1101 01:00:35.008317   58823 config.go:182] Loaded profile config "old-k8s-version-330042": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1101 01:00:35.008450   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:35.011424   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.011790   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:35.011820   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.012054   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:35.012305   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.012505   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.012692   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:35.012898   58823 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:35.013292   58823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1101 01:00:35.013310   58823 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 01:00:35.345179   58823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 01:00:35.345210   58823 machine.go:91] provisioned docker machine in 908.310008ms
	I1101 01:00:35.345224   58823 start.go:300] post-start starting for "old-k8s-version-330042" (driver="kvm2")
	I1101 01:00:35.345236   58823 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 01:00:35.345283   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:35.345634   58823 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 01:00:35.345666   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:35.348576   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.348945   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:35.348978   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.349171   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:35.349364   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.349527   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:35.349672   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:00:35.448239   58823 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 01:00:35.453459   58823 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 01:00:35.453495   58823 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1101 01:00:35.453589   58823 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1101 01:00:35.453705   58823 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> 145042.pem in /etc/ssl/certs
	I1101 01:00:35.453819   58823 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 01:00:35.464658   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:00:35.488669   58823 start.go:303] post-start completed in 143.429717ms
	I1101 01:00:35.488699   58823 fix.go:56] fixHost completed within 20.248082329s
	I1101 01:00:35.488723   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:35.491535   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.491917   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:35.491962   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.492108   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:35.492302   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.492472   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.492610   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:35.492777   58823 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:35.493085   58823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1101 01:00:35.493097   58823 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1101 01:00:35.633831   58823 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698800435.580601462
	
	I1101 01:00:35.633860   58823 fix.go:206] guest clock: 1698800435.580601462
	I1101 01:00:35.633872   58823 fix.go:219] Guest: 2023-11-01 01:00:35.580601462 +0000 UTC Remote: 2023-11-01 01:00:35.488703086 +0000 UTC m=+308.076532844 (delta=91.898376ms)
	I1101 01:00:35.633899   58823 fix.go:190] guest clock delta is within tolerance: 91.898376ms
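The fix.go:206–219 lines above read the guest clock (evidently `date +%s.%N`, whose format verbs are mangled in the log output) and compare it with the host-side timestamp of the probe; if the delta stays within a tolerance no resync is performed. A minimal Go sketch of that comparison, using the guest value from the log and a hypothetical 1-second tolerance (the real tolerance may differ):
	package main
	
	import (
		"fmt"
		"math"
		"strconv"
		"time"
	)
	
	func main() {
		// Output of the guest-side date command, as seen in the log above.
		guestRaw := "1698800435.580601462"
		sec, err := strconv.ParseFloat(guestRaw, 64)
		if err != nil {
			panic(err)
		}
		guest := time.Unix(0, int64(sec*float64(time.Second)))
	
		host := time.Now() // in the real check, the host timestamp recorded when the probe ran
		delta := host.Sub(guest)
	
		const tolerance = 1 * time.Second // hypothetical tolerance for illustration
		if math.Abs(float64(delta)) <= float64(tolerance) {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
		}
	}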
	I1101 01:00:35.633906   58823 start.go:83] releasing machines lock for "old-k8s-version-330042", held for 20.393324923s
	I1101 01:00:35.633937   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:35.634276   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetIP
	I1101 01:00:35.637052   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.637411   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:35.637462   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.637668   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:35.638239   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:35.638479   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:35.638661   58823 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 01:00:35.638703   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:35.638792   58823 ssh_runner.go:195] Run: cat /version.json
	I1101 01:00:35.638813   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:35.641913   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:35.641919   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.642071   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:35.642094   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.642106   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.642151   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.642323   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:35.642517   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:35.642547   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.642608   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:35.642640   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:00:35.642736   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.642872   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:35.642994   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:00:35.772469   58823 ssh_runner.go:195] Run: systemctl --version
	I1101 01:00:35.778377   58823 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 01:00:35.930189   58823 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 01:00:35.937481   58823 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 01:00:35.937583   58823 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 01:00:35.959054   58823 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 01:00:35.959081   58823 start.go:472] detecting cgroup driver to use...
	I1101 01:00:35.959166   58823 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 01:00:35.978338   58823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 01:00:35.994627   58823 docker.go:204] disabling cri-docker service (if available) ...
	I1101 01:00:35.994690   58823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 01:00:36.010212   58823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 01:00:36.025616   58823 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 01:00:36.132484   58823 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 01:00:36.266531   58823 docker.go:220] disabling docker service ...
	I1101 01:00:36.266604   58823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 01:00:36.280303   58823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 01:00:36.291905   58823 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 01:00:36.413114   58823 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 01:00:36.527297   58823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 01:00:36.540547   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 01:00:36.561997   58823 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1101 01:00:36.562070   58823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:36.574735   58823 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 01:00:36.574809   58823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:36.584015   58823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:36.592896   58823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:36.602199   58823 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 01:00:36.611742   58823 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 01:00:36.620073   58823 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 01:00:36.620140   58823 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 01:00:36.633237   58823 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 01:00:36.641679   58823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 01:00:36.786323   58823 ssh_runner.go:195] Run: sudo systemctl restart crio
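The crio.go:59/70 steps above rewrite /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup manager, conmon cgroup) before restarting the service. A sketch of that kind of in-place key rewrite in Go — only two of the edits are shown, the path is the one from the log, and this is an illustration rather than minikube's implementation:
	package main
	
	import (
		"fmt"
		"os"
		"regexp"
	)
	
	func main() {
		path := "/etc/crio/crio.conf.d/02-crio.conf" // run against a copy when experimenting
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// Equivalent in spirit to: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
		rePause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		reCgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		out := rePause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.1"`))
		out = reCgroup.ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			panic(err)
		}
		fmt.Println("rewrote", path)
	}
After a rewrite like this the runtime has to be restarted (the `systemctl restart crio` above) for the new pause image and cgroup driver to take effect.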
	I1101 01:00:37.011240   58823 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 01:00:37.011332   58823 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 01:00:37.016349   58823 start.go:540] Will wait 60s for crictl version
	I1101 01:00:37.016417   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:37.020952   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 01:00:37.068566   58823 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1101 01:00:37.068649   58823 ssh_runner.go:195] Run: crio --version
	I1101 01:00:37.119257   58823 ssh_runner.go:195] Run: crio --version
	I1101 01:00:37.170471   58823 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1101 01:00:37.172128   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetIP
	I1101 01:00:37.175116   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:37.175552   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:37.175583   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:37.175834   58823 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1101 01:00:37.179970   58823 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:00:37.193466   58823 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1101 01:00:37.193550   58823 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:00:37.239780   58823 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1101 01:00:37.239851   58823 ssh_runner.go:195] Run: which lz4
	I1101 01:00:37.243871   58823 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1101 01:00:37.248203   58823 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 01:00:37.248243   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1101 01:00:33.273385   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:33.468847   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:33.558663   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:33.632226   58730 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:00:33.632305   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:33.645291   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:34.159920   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:34.660339   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:35.159837   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:35.659362   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:36.159870   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:36.189698   58730 api_server.go:72] duration metric: took 2.557471176s to wait for apiserver process to appear ...
	I1101 01:00:36.189726   58730 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:00:36.189746   58730 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8443/healthz ...
	I1101 01:00:35.662001   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Start
	I1101 01:00:35.662248   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Ensuring networks are active...
	I1101 01:00:35.663075   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Ensuring network default is active
	I1101 01:00:35.663589   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Ensuring network mk-default-k8s-diff-port-639310 is active
	I1101 01:00:35.664066   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Getting domain xml...
	I1101 01:00:35.664780   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Creating domain...
	I1101 01:00:37.046385   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting to get IP...
	I1101 01:00:37.047592   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.048056   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.048160   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:37.048064   59967 retry.go:31] will retry after 244.19131ms: waiting for machine to come up
	I1101 01:00:37.293636   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.294421   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.294535   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:37.294483   59967 retry.go:31] will retry after 281.302105ms: waiting for machine to come up
	I1101 01:00:37.577271   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.577934   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.577962   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:37.577874   59967 retry.go:31] will retry after 376.713113ms: waiting for machine to come up
	I1101 01:00:37.956666   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.957154   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.957182   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:37.957125   59967 retry.go:31] will retry after 366.92844ms: waiting for machine to come up
	I1101 01:00:38.325741   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:38.326257   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:38.326291   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:38.326226   59967 retry.go:31] will retry after 478.435824ms: waiting for machine to come up
	I1101 01:00:38.806215   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:38.806928   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:38.806965   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:38.806904   59967 retry.go:31] will retry after 910.120665ms: waiting for machine to come up
	I1101 01:00:39.718641   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:39.719281   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:39.719307   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:39.719210   59967 retry.go:31] will retry after 1.017844602s: waiting for machine to come up
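
The repeated "will retry after ...: waiting for machine to come up" lines above are a retry loop with a randomized, growing delay while libvirt hands the new domain a DHCP lease. A small sketch of that pattern follows; it is not minikube's retry package, just the general shape, and the timings are invented for the example.

    // retry_sketch.go - illustrative only; minikube's pkg/util/retry differs in detail.
    // Polls a condition with a randomized, growing delay, as in the
    // "will retry after ...: waiting for machine to come up" lines above.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff keeps calling fn until it succeeds or maxWait elapses.
    func retryWithBackoff(fn func() error, maxWait time.Duration) error {
    	deadline := time.Now().Add(maxWait)
    	delay := 200 * time.Millisecond
    	for attempt := 1; ; attempt++ {
    		if err := fn(); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return errors.New("timed out waiting for condition")
    		}
    		// Add jitter and grow the delay, capping it at a few seconds.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v (attempt %d)\n", sleep, attempt)
    		time.Sleep(sleep)
    		if delay < 4*time.Second {
    			delay *= 2
    		}
    	}
    }

    func main() {
    	start := time.Now()
    	err := retryWithBackoff(func() error {
    		// Placeholder condition: pretend the VM gets an IP after about 3 seconds.
    		if time.Since(start) > 3*time.Second {
    			return nil
    		}
    		return errors.New("no IP yet")
    	}, 1*time.Minute)
    	fmt.Println("result:", err)
    }
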
	I1101 01:00:40.636542   58730 api_server.go:279] https://192.168.61.83:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 01:00:40.636586   58730 api_server.go:103] status: https://192.168.61.83:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 01:00:40.636602   58730 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8443/healthz ...
	I1101 01:00:40.687211   58730 api_server.go:279] https://192.168.61.83:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 01:00:40.687258   58730 api_server.go:103] status: https://192.168.61.83:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 01:00:41.187988   58730 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8443/healthz ...
	I1101 01:00:41.197585   58730 api_server.go:279] https://192.168.61.83:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:00:41.197626   58730 api_server.go:103] status: https://192.168.61.83:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:00:41.688019   58730 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8443/healthz ...
	I1101 01:00:41.698406   58730 api_server.go:279] https://192.168.61.83:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:00:41.698439   58730 api_server.go:103] status: https://192.168.61.83:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:00:42.188141   58730 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8443/healthz ...
	I1101 01:00:42.195663   58730 api_server.go:279] https://192.168.61.83:8443/healthz returned 200:
	ok
	I1101 01:00:42.204715   58730 api_server.go:141] control plane version: v1.28.3
	I1101 01:00:42.204746   58730 api_server.go:131] duration metric: took 6.015012484s to wait for apiserver health ...
	I1101 01:00:42.204756   58730 cni.go:84] Creating CNI manager for ""
	I1101 01:00:42.204764   58730 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:00:42.206831   58730 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
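
The 403 / 500 / 200 sequence above is the apiserver coming up: anonymous requests are rejected first, then /healthz reports failing poststarthooks, and finally it returns ok. A minimal sketch of such a poll is below; it is illustrative, not minikube's api_server.go, and it skips TLS verification because the probe is unauthenticated. The address is copied from the log.

    // healthz_poll_sketch.go - illustrative only, not minikube's api_server.go.
    // Polls the apiserver /healthz endpoint until it returns 200, treating the
    // 403 (anonymous user) and 500 (poststarthooks still failing) responses seen
    // above as "not ready yet".
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The probe is unauthenticated, so certificate verification is skipped here.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	// Address taken from the log above; adjust for your own cluster.
    	if err := waitForHealthz("https://192.168.61.83:8443/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
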
	I1101 01:00:38.979032   58823 crio.go:444] Took 1.735199 seconds to copy over tarball
	I1101 01:00:38.979127   58823 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 01:00:42.235526   58823 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.256363592s)
	I1101 01:00:42.235558   58823 crio.go:451] Took 3.256498 seconds to extract the tarball
	I1101 01:00:42.235592   58823 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 01:00:42.278508   58823 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:00:42.332199   58823 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1101 01:00:42.332225   58823 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1101 01:00:42.332323   58823 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:00:42.332383   58823 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1101 01:00:42.332425   58823 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1101 01:00:42.332445   58823 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1101 01:00:42.332394   58823 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1101 01:00:42.332554   58823 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1101 01:00:42.332552   58823 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1101 01:00:42.332611   58823 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1101 01:00:42.333952   58823 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1101 01:00:42.333965   58823 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1101 01:00:42.333971   58823 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1101 01:00:42.333973   58823 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:00:42.333951   58823 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1101 01:00:42.333959   58823 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1101 01:00:42.334015   58823 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1101 01:00:42.334422   58823 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
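
The preload steps above first stat /preloaded.tar.lz4, copy the cached tarball over when it is missing, and then unpack it with lz4 into /var so CRI-O starts with the images already present. A local sketch of the check-then-extract part follows; it is illustrative only, since minikube runs these commands on the guest via ssh_runner, and it assumes tar and lz4 are on the PATH.

    // preload_extract_sketch.go - illustrative only; minikube does this over SSH via ssh_runner.
    // Checks whether the preload tarball is present and, if so, unpacks it into /var
    // with lz4 decompression, mirroring the "tar -I lz4 -C /var -xf" step above.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func extractPreload(tarball, dest string) error {
    	if _, err := os.Stat(tarball); err != nil {
    		// In the real flow the tarball would be copied over (scp) before extraction.
    		return fmt.Errorf("preload tarball not present: %w", err)
    	}
    	// Requires tar and lz4 to be installed, as on the minikube guest image.
    	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", dest, "-xf", tarball)
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	return cmd.Run()
    }

    func main() {
    	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
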
	I1101 01:00:42.208425   58730 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:00:42.243672   58730 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1101 01:00:42.270472   58730 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:00:40.739283   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:40.739839   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:40.739871   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:40.739751   59967 retry.go:31] will retry after 924.830892ms: waiting for machine to come up
	I1101 01:00:41.666231   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:41.666922   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:41.666949   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:41.666878   59967 retry.go:31] will retry after 1.792434708s: waiting for machine to come up
	I1101 01:00:43.461158   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:43.461723   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:43.461758   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:43.461651   59967 retry.go:31] will retry after 1.458280506s: waiting for machine to come up
	I1101 01:00:44.921321   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:44.922072   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:44.922105   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:44.922018   59967 retry.go:31] will retry after 2.732488928s: waiting for machine to come up
	I1101 01:00:42.548949   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1101 01:00:42.549011   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1101 01:00:42.552787   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1101 01:00:42.554125   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1101 01:00:42.559301   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1101 01:00:42.560733   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1101 01:00:42.564609   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1101 01:00:42.857456   58823 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1101 01:00:42.857497   58823 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1101 01:00:42.857537   58823 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1101 01:00:42.857565   58823 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1101 01:00:42.857583   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.857502   58823 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1101 01:00:42.857597   58823 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1101 01:00:42.857644   58823 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1101 01:00:42.857703   58823 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1101 01:00:42.857733   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.857663   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.857666   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.880301   58823 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1101 01:00:42.880350   58823 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1101 01:00:42.880362   58823 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1101 01:00:42.880404   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.880421   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1101 01:00:42.880432   58823 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1101 01:00:42.880473   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.880475   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1101 01:00:42.880542   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1101 01:00:42.880377   58823 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1101 01:00:42.880587   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1101 01:00:42.880610   58823 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1101 01:00:42.880663   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.958449   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1101 01:00:42.975151   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1101 01:00:42.975188   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1101 01:00:42.979136   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1101 01:00:42.979198   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1101 01:00:42.979246   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1101 01:00:42.979306   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1101 01:00:43.059447   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1101 01:00:43.059470   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1101 01:00:43.059515   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1101 01:00:43.059572   58823 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1101 01:00:43.065313   58823 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1101 01:00:43.065337   58823 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1101 01:00:43.065397   58823 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1101 01:00:43.212775   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:00:44.821509   58823 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.756075689s)
	I1101 01:00:44.821542   58823 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1101 01:00:44.821600   58823 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.608800531s)
	I1101 01:00:44.821639   58823 cache_images.go:92] LoadImages completed in 2.489401317s
	W1101 01:00:44.821749   58823 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0: no such file or directory
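
When an image is missing from the preload, the log above shows the cache path: inspect the image in the runtime, remove any stale copy with crictl, then load the cached tarball with podman. The sketch below shows that inspect/remove/load cycle with the same commands; it is a simplification of minikube's cache_images.go, which also hashes images, transfers them over SSH and runs loads in parallel.

    // cache_images_sketch.go - illustrative simplification of the flow in the log above.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func run(name string, args ...string) error {
    	cmd := exec.Command(name, args...)
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	return cmd.Run()
    }

    func ensureImage(ref, tarball string) error {
    	// If the runtime already has the image, nothing to do.
    	if err := run("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref); err == nil {
    		return nil
    	}
    	// Remove any stale copy, then load the cached tarball.
    	_ = run("sudo", "crictl", "rmi", ref) // best effort; the image may simply be absent
    	return run("sudo", "podman", "load", "-i", tarball)
    }

    func main() {
    	// Image and tarball names taken from the log; other images work the same way.
    	if err := ensureImage("registry.k8s.io/pause:3.1", "/var/lib/minikube/images/pause_3.1"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
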
	I1101 01:00:44.821888   58823 ssh_runner.go:195] Run: crio config
	I1101 01:00:44.911017   58823 cni.go:84] Creating CNI manager for ""
	I1101 01:00:44.911094   58823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:00:44.911132   58823 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 01:00:44.911173   58823 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.90 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-330042 NodeName:old-k8s-version-330042 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1101 01:00:44.911365   58823 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-330042"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-330042
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.90:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 01:00:44.911510   58823 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-330042 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-330042 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1101 01:00:44.911601   58823 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1101 01:00:44.925733   58823 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 01:00:44.925810   58823 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 01:00:44.939166   58823 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1101 01:00:44.962847   58823 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 01:00:44.986855   58823 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1101 01:00:45.011998   58823 ssh_runner.go:195] Run: grep 192.168.39.90	control-plane.minikube.internal$ /etc/hosts
	I1101 01:00:45.017160   58823 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.90	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:00:45.035826   58823 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042 for IP: 192.168.39.90
	I1101 01:00:45.035866   58823 certs.go:190] acquiring lock for shared ca certs: {Name:mk6185b3938c4b51e80f57b7c81926c81b632b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:00:45.036097   58823 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key
	I1101 01:00:45.036161   58823 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key
	I1101 01:00:45.036276   58823 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/client.key
	I1101 01:00:45.036363   58823 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/apiserver.key.05a13cdc
	I1101 01:00:45.036423   58823 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/proxy-client.key
	I1101 01:00:45.036600   58823 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem (1338 bytes)
	W1101 01:00:45.036642   58823 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504_empty.pem, impossibly tiny 0 bytes
	I1101 01:00:45.036657   58823 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 01:00:45.036697   58823 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem (1078 bytes)
	I1101 01:00:45.036734   58823 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem (1123 bytes)
	I1101 01:00:45.036769   58823 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem (1675 bytes)
	I1101 01:00:45.036837   58823 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:00:45.037808   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 01:00:45.071828   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 01:00:45.105069   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 01:00:45.136650   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 01:00:45.169633   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 01:00:45.202102   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 01:00:45.234227   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 01:00:45.265901   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 01:00:45.297720   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem --> /usr/share/ca-certificates/14504.pem (1338 bytes)
	I1101 01:00:45.330915   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /usr/share/ca-certificates/145042.pem (1708 bytes)
	I1101 01:00:45.361364   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 01:00:45.391023   58823 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 01:00:45.412643   58823 ssh_runner.go:195] Run: openssl version
	I1101 01:00:45.419938   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145042.pem && ln -fs /usr/share/ca-certificates/145042.pem /etc/ssl/certs/145042.pem"
	I1101 01:00:45.433972   58823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145042.pem
	I1101 01:00:45.439966   58823 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 01:00:45.440070   58823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145042.pem
	I1101 01:00:45.447248   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145042.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 01:00:45.461261   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 01:00:45.475166   58823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:45.481174   58823 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:45.481281   58823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:45.488190   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 01:00:45.502428   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14504.pem && ln -fs /usr/share/ca-certificates/14504.pem /etc/ssl/certs/14504.pem"
	I1101 01:00:45.515353   58823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14504.pem
	I1101 01:00:45.520135   58823 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 01:00:45.520196   58823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14504.pem
	I1101 01:00:45.525605   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14504.pem /etc/ssl/certs/51391683.0"
	I1101 01:00:45.535886   58823 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 01:00:45.540671   58823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 01:00:45.546973   58823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 01:00:45.554439   58823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 01:00:45.562216   58823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 01:00:45.570082   58823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 01:00:45.578073   58823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
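
The openssl x509 -checkend 86400 probes above ask whether each control-plane certificate expires within the next 24 hours. An equivalent check in Go, using crypto/x509 instead of shelling out, is sketched below; the certificate path is one of the files from the log and is the only assumption.

    // cert_check_sketch.go - illustrative equivalent of the
    // "openssl x509 -noout -in <cert> -checkend 86400" probes above: reports whether
    // a certificate expires within the next 24 hours.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func expiresWithin(certPath string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(certPath)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", certPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	// True when the certificate's NotAfter falls inside the next d.
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", soon)
    }
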
	I1101 01:00:45.586056   58823 kubeadm.go:404] StartCluster: {Name:old-k8s-version-330042 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-330042 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 01:00:45.586202   58823 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 01:00:45.586270   58823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:00:45.632205   58823 cri.go:89] found id: ""
	I1101 01:00:45.632279   58823 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 01:00:45.646397   58823 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1101 01:00:45.646432   58823 kubeadm.go:636] restartCluster start
	I1101 01:00:45.646492   58823 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 01:00:45.660754   58823 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:45.662302   58823 kubeconfig.go:92] found "old-k8s-version-330042" server: "https://192.168.39.90:8443"
	I1101 01:00:45.665617   58823 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 01:00:45.679127   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:45.679203   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:45.697578   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:45.697601   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:45.697662   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:45.715086   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:46.215841   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:46.215939   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:46.233039   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:46.715162   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:46.715283   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:46.727101   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:47.215409   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:47.215512   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:47.228104   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:43.297105   58730 system_pods.go:59] 9 kube-system pods found
	I1101 01:00:43.452043   58730 system_pods.go:61] "coredns-5dd5756b68-9hvh7" [d7d126c2-c270-452c-b939-15303a174742] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 01:00:43.452062   58730 system_pods.go:61] "coredns-5dd5756b68-gptmc" [fbbb9f17-32d6-456d-8171-eadcf64b11a8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 01:00:43.452074   58730 system_pods.go:61] "etcd-embed-certs-754132" [3c7474c1-788e-461d-bd20-e62c3c12cf27] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 01:00:43.452086   58730 system_pods.go:61] "kube-apiserver-embed-certs-754132" [d218a8d6-536c-400a-b81e-325b89ab475b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 01:00:43.452116   58730 system_pods.go:61] "kube-controller-manager-embed-certs-754132" [930b7861-b807-4f24-ba3c-9365a1e8dd8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 01:00:43.452128   58730 system_pods.go:61] "kube-proxy-d5d5x" [c7a6d923-0b37-452b-9979-0a64c05ee737] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 01:00:43.452142   58730 system_pods.go:61] "kube-scheduler-embed-certs-754132" [fd9c0833-f9d4-41cf-b5dd-b676ea5da6ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 01:00:43.452156   58730 system_pods.go:61] "metrics-server-57f55c9bc5-znchz" [60da0fbf-a2c4-4910-b06b-251b33b8ad0b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:00:43.452169   58730 system_pods.go:61] "storage-provisioner" [fbece4fb-6f83-4f17-acb8-94f493dd72e9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 01:00:43.452185   58730 system_pods.go:74] duration metric: took 1.181683794s to wait for pod list to return data ...
	I1101 01:00:43.452198   58730 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:00:44.181694   58730 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:00:44.181739   58730 node_conditions.go:123] node cpu capacity is 2
	I1101 01:00:44.181756   58730 node_conditions.go:105] duration metric: took 729.549671ms to run NodePressure ...
	I1101 01:00:44.181784   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:45.274729   58730 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.092921592s)
	I1101 01:00:45.274761   58730 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1101 01:00:45.285444   58730 kubeadm.go:787] kubelet initialised
	I1101 01:00:45.285478   58730 kubeadm.go:788] duration metric: took 10.704919ms waiting for restarted kubelet to initialise ...
	I1101 01:00:45.285489   58730 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:00:45.303122   58730 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-9hvh7" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:47.333376   58730 pod_ready.go:92] pod "coredns-5dd5756b68-9hvh7" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:47.333404   58730 pod_ready.go:81] duration metric: took 2.030252648s waiting for pod "coredns-5dd5756b68-9hvh7" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:47.333415   58730 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-gptmc" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:47.339165   58730 pod_ready.go:92] pod "coredns-5dd5756b68-gptmc" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:47.339189   58730 pod_ready.go:81] duration metric: took 5.76803ms waiting for pod "coredns-5dd5756b68-gptmc" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:47.339201   58730 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
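
The pod_ready.go lines above wait for each system-critical pod to report the Ready condition before the restart is considered successful. A rough client-go sketch of that wait follows; it is illustrative only, the kubeconfig path and pod name are placeholders, and minikube's own implementation additionally handles label selectors and richer status reporting.

    // pod_ready_sketch.go - illustrative only; not minikube's pod_ready.go.
    package main

    import (
    	"context"
    	"fmt"
    	"os"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the PodReady condition is True.
    func isPodReady(p *corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	// Placeholder kubeconfig path and pod name; substitute real values.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-example", metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to become Ready")
    }
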
	I1101 01:00:47.656259   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:47.656733   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:47.656767   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:47.656688   59967 retry.go:31] will retry after 3.546373187s: waiting for machine to come up
	I1101 01:00:47.716219   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:47.716302   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:47.729221   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:48.215453   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:48.215562   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:48.230259   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:48.715905   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:48.716035   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:48.729001   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:49.216123   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:49.216217   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:49.232128   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:49.715640   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:49.715708   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:49.729098   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:50.215271   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:50.215379   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:50.228075   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:50.715151   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:50.715256   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:50.726839   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:51.215204   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:51.215293   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:51.227412   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:51.715753   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:51.715870   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:51.728794   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:52.215318   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:52.215437   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:52.227527   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:48.860188   58730 pod_ready.go:92] pod "etcd-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:48.860215   58730 pod_ready.go:81] duration metric: took 1.521005544s waiting for pod "etcd-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:48.860228   58730 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:50.286848   58730 pod_ready.go:92] pod "kube-apiserver-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:50.286882   58730 pod_ready.go:81] duration metric: took 1.426640629s waiting for pod "kube-apiserver-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:50.286894   58730 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:51.886531   58730 pod_ready.go:92] pod "kube-controller-manager-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:51.886555   58730 pod_ready.go:81] duration metric: took 1.599653882s waiting for pod "kube-controller-manager-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:51.886565   58730 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d5d5x" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:52.079723   58730 pod_ready.go:92] pod "kube-proxy-d5d5x" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:52.079752   58730 pod_ready.go:81] duration metric: took 193.181169ms waiting for pod "kube-proxy-d5d5x" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:52.079766   58730 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:51.204423   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:51.204909   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:51.204945   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:51.204854   59967 retry.go:31] will retry after 3.382936792s: waiting for machine to come up
	I1101 01:00:54.588976   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.589398   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Found IP for machine: 192.168.72.97
	I1101 01:00:54.589427   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Reserving static IP address...
	I1101 01:00:54.589447   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has current primary IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.589764   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Reserved static IP address: 192.168.72.97
	I1101 01:00:54.589783   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for SSH to be available...
	I1101 01:00:54.589811   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-639310", mac: "52:54:00:83:e0:44", ip: "192.168.72.97"} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.589841   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | skip adding static IP to network mk-default-k8s-diff-port-639310 - found existing host DHCP lease matching {name: "default-k8s-diff-port-639310", mac: "52:54:00:83:e0:44", ip: "192.168.72.97"}
	I1101 01:00:54.589858   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | Getting to WaitForSSH function...
	I1101 01:00:54.591920   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.592295   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.592327   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.592518   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | Using SSH client type: external
	I1101 01:00:54.592546   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa (-rw-------)
	I1101 01:00:54.592568   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 01:00:54.592581   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | About to run SSH command:
	I1101 01:00:54.592605   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | exit 0
	I1101 01:00:54.687664   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | SSH cmd err, output: <nil>: 
	I1101 01:00:54.688005   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetConfigRaw
	I1101 01:00:54.688653   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetIP
	I1101 01:00:54.691258   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.691761   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.691803   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.692096   59148 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/config.json ...
	I1101 01:00:54.692278   59148 machine.go:88] provisioning docker machine ...
	I1101 01:00:54.692297   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:54.692554   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetMachineName
	I1101 01:00:54.692765   59148 buildroot.go:166] provisioning hostname "default-k8s-diff-port-639310"
	I1101 01:00:54.692787   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetMachineName
	I1101 01:00:54.692962   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:54.695491   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.695887   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.695917   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.696074   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:54.696280   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:54.696477   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:54.696624   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:54.696817   59148 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:54.697275   59148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.97 22 <nil> <nil>}
	I1101 01:00:54.697298   59148 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-639310 && echo "default-k8s-diff-port-639310" | sudo tee /etc/hostname
	I1101 01:00:54.836084   59148 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-639310
	
	I1101 01:00:54.836118   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:54.839109   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.839437   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.839463   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.839732   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:54.839986   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:54.840131   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:54.840290   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:54.840501   59148 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:54.840865   59148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.97 22 <nil> <nil>}
	I1101 01:00:54.840885   59148 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-639310' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-639310/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-639310' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 01:00:54.979804   59148 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 01:00:54.979841   59148 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1101 01:00:54.979870   59148 buildroot.go:174] setting up certificates
	I1101 01:00:54.979881   59148 provision.go:83] configureAuth start
	I1101 01:00:54.979898   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetMachineName
	I1101 01:00:54.980246   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetIP
	I1101 01:00:54.983397   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.983760   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.983794   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.984029   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:54.986746   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.987112   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.987160   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.987328   59148 provision.go:138] copyHostCerts
	I1101 01:00:54.987418   59148 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem, removing ...
	I1101 01:00:54.987437   59148 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 01:00:54.987507   59148 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1101 01:00:54.987619   59148 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem, removing ...
	I1101 01:00:54.987628   59148 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 01:00:54.987651   59148 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1101 01:00:54.987707   59148 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem, removing ...
	I1101 01:00:54.987714   59148 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 01:00:54.987731   59148 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1101 01:00:54.987773   59148 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-639310 san=[192.168.72.97 192.168.72.97 localhost 127.0.0.1 minikube default-k8s-diff-port-639310]
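	The provisioning line above shows minikube signing a server certificate against its CA with SANs for the machine IP (192.168.72.97), localhost, and the profile hostnames. The Go sketch below illustrates roughly how such a SAN-bearing, CA-signed server certificate can be produced with the standard crypto/x509 package; the file names, key encoding, validity period, and organization string are assumptions for illustration, and this is not minikube's actual implementation.

// servercert_sketch.go — an illustrative sketch only, not minikube's code.
// It signs a server certificate with an existing CA and attaches the same
// kinds of SANs listed in the provisioning line above (IPs and hostnames).
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Placeholder file names; the report keeps these under .minikube/certs.
	caPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		panic(err)
	}
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	if err != nil {
		panic(err)
	}
	caBlock, _ := pem.Decode(caPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if caBlock == nil || keyBlock == nil {
		panic("could not decode CA PEM input")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		panic(err)
	}
	// Assumes a PKCS#8-encoded CA key; adjust the parser for other encodings.
	caKey, err := x509.ParsePKCS8PrivateKey(keyBlock.Bytes)
	if err != nil {
		panic(err)
	}

	// Fresh key pair for the server certificate itself.
	srvKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-639310"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirroring the log: machine IP, loopback, and host names.
		IPAddresses: []net.IP{net.ParseIP("192.168.72.97"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "default-k8s-diff-port-639310"},
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}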
	I1101 01:00:56.081549   58676 start.go:369] acquired machines lock for "no-preload-008483" in 57.600332472s
	I1101 01:00:56.081600   58676 start.go:96] Skipping create...Using existing machine configuration
	I1101 01:00:56.081611   58676 fix.go:54] fixHost starting: 
	I1101 01:00:56.082003   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:00:56.082041   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:00:56.098896   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33091
	I1101 01:00:56.099300   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:00:56.099786   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:00:56.099817   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:00:56.100159   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:00:56.100364   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:00:56.100511   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetState
	I1101 01:00:56.104041   58676 fix.go:102] recreateIfNeeded on no-preload-008483: state=Stopped err=<nil>
	I1101 01:00:56.104071   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	W1101 01:00:56.104250   58676 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 01:00:56.106287   58676 out.go:177] * Restarting existing kvm2 VM for "no-preload-008483" ...
	I1101 01:00:52.715585   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:52.715665   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:52.726877   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:53.216119   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:53.216202   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:53.228700   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:53.715253   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:53.715342   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:53.729029   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:54.215451   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:54.215554   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:54.228217   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:54.715451   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:54.715513   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:54.727356   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:55.216034   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:55.216130   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:55.227905   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:55.680067   58823 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1101 01:00:55.680120   58823 kubeadm.go:1128] stopping kube-system containers ...
	I1101 01:00:55.680135   58823 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 01:00:55.680204   58823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:00:55.726658   58823 cri.go:89] found id: ""
	I1101 01:00:55.726744   58823 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 01:00:55.748477   58823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:00:55.758933   58823 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:00:55.759013   58823 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:00:55.769130   58823 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 01:00:55.769156   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:55.911136   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:57.164062   58823 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.252874473s)
	I1101 01:00:57.164095   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:57.403267   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:55.270327   59148 provision.go:172] copyRemoteCerts
	I1101 01:00:55.270394   59148 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 01:00:55.270418   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:55.272988   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.273410   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:55.273444   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.273609   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:55.273818   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:55.273966   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:55.274113   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:00:55.367354   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 01:00:55.391069   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1101 01:00:55.413001   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 01:00:55.436904   59148 provision.go:86] duration metric: configureAuth took 457.006108ms
	I1101 01:00:55.436930   59148 buildroot.go:189] setting minikube options for container-runtime
	I1101 01:00:55.437115   59148 config.go:182] Loaded profile config "default-k8s-diff-port-639310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:00:55.437187   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:55.440286   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.440627   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:55.440662   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.440789   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:55.440989   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:55.441187   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:55.441330   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:55.441491   59148 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:55.441905   59148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.97 22 <nil> <nil>}
	I1101 01:00:55.441928   59148 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 01:00:55.788340   59148 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 01:00:55.788372   59148 machine.go:91] provisioned docker machine in 1.096081387s
	I1101 01:00:55.788386   59148 start.go:300] post-start starting for "default-k8s-diff-port-639310" (driver="kvm2")
	I1101 01:00:55.788401   59148 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 01:00:55.788443   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:55.788777   59148 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 01:00:55.788846   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:55.792110   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.792594   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:55.792626   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.792829   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:55.793080   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:55.793273   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:55.793421   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:00:55.893108   59148 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 01:00:55.898425   59148 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 01:00:55.898452   59148 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1101 01:00:55.898530   59148 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1101 01:00:55.898619   59148 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> 145042.pem in /etc/ssl/certs
	I1101 01:00:55.898751   59148 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 01:00:55.909396   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:00:55.943412   59148 start.go:303] post-start completed in 154.998365ms
	I1101 01:00:55.943440   59148 fix.go:56] fixHost completed within 20.309363198s
	I1101 01:00:55.943464   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:55.946417   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.946777   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:55.946810   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.947048   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:55.947268   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:55.947484   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:55.947662   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:55.947849   59148 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:55.948212   59148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.97 22 <nil> <nil>}
	I1101 01:00:55.948225   59148 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1101 01:00:56.081387   59148 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698800456.033536949
	
	I1101 01:00:56.081411   59148 fix.go:206] guest clock: 1698800456.033536949
	I1101 01:00:56.081422   59148 fix.go:219] Guest: 2023-11-01 01:00:56.033536949 +0000 UTC Remote: 2023-11-01 01:00:55.943445038 +0000 UTC m=+270.963710441 (delta=90.091911ms)
	I1101 01:00:56.081446   59148 fix.go:190] guest clock delta is within tolerance: 90.091911ms
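	The two fix.go lines above compare the guest clock against the host-side timestamp and accept the roughly 90ms skew as being within tolerance. A minimal Go sketch of that kind of comparison follows, using the timestamps from the log; the one-second tolerance and the function name are assumptions, not minikube's constants.

// clockdelta_sketch.go — illustrative only, not minikube's code.
package main

import (
	"fmt"
	"time"
)

// withinTolerance reports the absolute host/guest clock skew and whether it
// falls under the given tolerance. The tolerance here is an assumed example.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Values taken from the two log lines above.
	guest := time.Unix(0, 1698800456033536949).UTC()
	host := time.Date(2023, 11, 1, 1, 0, 55, 943445038, time.UTC)
	delta, ok := withinTolerance(guest, host, time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}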
	I1101 01:00:56.081451   59148 start.go:83] releasing machines lock for "default-k8s-diff-port-639310", held for 20.447404197s
	I1101 01:00:56.081484   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:56.081826   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetIP
	I1101 01:00:56.084827   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:56.085289   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:56.085326   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:56.085543   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:56.086049   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:56.086272   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:56.086374   59148 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 01:00:56.086425   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:56.086677   59148 ssh_runner.go:195] Run: cat /version.json
	I1101 01:00:56.086709   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:56.089377   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:56.089696   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:56.089784   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:56.089841   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:56.090077   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:56.090088   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:56.090108   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:56.090256   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:56.090329   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:56.090405   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:56.090479   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:56.090557   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:56.090613   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:00:56.090681   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:00:56.220669   59148 ssh_runner.go:195] Run: systemctl --version
	I1101 01:00:56.226971   59148 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 01:00:56.375845   59148 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 01:00:56.383893   59148 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 01:00:56.383986   59148 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 01:00:56.404009   59148 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 01:00:56.404035   59148 start.go:472] detecting cgroup driver to use...
	I1101 01:00:56.404107   59148 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 01:00:56.420015   59148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 01:00:56.435577   59148 docker.go:204] disabling cri-docker service (if available) ...
	I1101 01:00:56.435652   59148 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 01:00:56.448542   59148 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 01:00:56.465197   59148 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 01:00:56.607142   59148 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 01:00:56.739287   59148 docker.go:220] disabling docker service ...
	I1101 01:00:56.739366   59148 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 01:00:56.753861   59148 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 01:00:56.768891   59148 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 01:00:56.893929   59148 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 01:00:57.022891   59148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 01:00:57.039063   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 01:00:57.058893   59148 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 01:00:57.058964   59148 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:57.070769   59148 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 01:00:57.070845   59148 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:57.082528   59148 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:57.094350   59148 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:57.105953   59148 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 01:00:57.117745   59148 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 01:00:57.128493   59148 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 01:00:57.128553   59148 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 01:00:57.145858   59148 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 01:00:57.157318   59148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 01:00:57.288371   59148 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 01:00:57.489356   59148 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 01:00:57.489458   59148 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
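	After restarting CRI-O, the run above waits up to 60 seconds for the socket path /var/run/crio/crio.sock to appear. A small Go sketch of such a wait loop is below; polling os.Stat at a fixed interval is an illustration only, not the ssh_runner mechanism used in the log.

// socketwait_sketch.go — illustrative only.
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for path until it exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(250 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}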
	I1101 01:00:57.495837   59148 start.go:540] Will wait 60s for crictl version
	I1101 01:00:57.495907   59148 ssh_runner.go:195] Run: which crictl
	I1101 01:00:57.500572   59148 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 01:00:57.546076   59148 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1101 01:00:57.546245   59148 ssh_runner.go:195] Run: crio --version
	I1101 01:00:57.601745   59148 ssh_runner.go:195] Run: crio --version
	I1101 01:00:57.664097   59148 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1101 01:00:54.387561   58730 pod_ready.go:102] pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace has status "Ready":"False"
	I1101 01:00:56.388062   58730 pod_ready.go:92] pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:56.388085   58730 pod_ready.go:81] duration metric: took 4.308312567s waiting for pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:56.388094   58730 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:57.666096   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetIP
	I1101 01:00:57.670028   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:57.670437   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:57.670472   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:57.670760   59148 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1101 01:00:57.675850   59148 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:00:57.689379   59148 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 01:00:57.689439   59148 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:00:57.736333   59148 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1101 01:00:57.736404   59148 ssh_runner.go:195] Run: which lz4
	I1101 01:00:57.740532   59148 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 01:00:57.745488   59148 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 01:00:57.745535   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1101 01:00:59.649981   59148 crio.go:444] Took 1.909486 seconds to copy over tarball
	I1101 01:00:59.650070   59148 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 01:00:56.107642   58676 main.go:141] libmachine: (no-preload-008483) Calling .Start
	I1101 01:00:56.107815   58676 main.go:141] libmachine: (no-preload-008483) Ensuring networks are active...
	I1101 01:00:56.108696   58676 main.go:141] libmachine: (no-preload-008483) Ensuring network default is active
	I1101 01:00:56.109190   58676 main.go:141] libmachine: (no-preload-008483) Ensuring network mk-no-preload-008483 is active
	I1101 01:00:56.109623   58676 main.go:141] libmachine: (no-preload-008483) Getting domain xml...
	I1101 01:00:56.110400   58676 main.go:141] libmachine: (no-preload-008483) Creating domain...
	I1101 01:00:57.626479   58676 main.go:141] libmachine: (no-preload-008483) Waiting to get IP...
	I1101 01:00:57.627653   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:00:57.628245   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:00:57.628315   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:00:57.628210   60142 retry.go:31] will retry after 306.868541ms: waiting for machine to come up
	I1101 01:00:57.936854   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:00:57.937358   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:00:57.937392   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:00:57.937309   60142 retry.go:31] will retry after 366.94808ms: waiting for machine to come up
	I1101 01:00:58.306219   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:00:58.306880   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:00:58.306909   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:00:58.306815   60142 retry.go:31] will retry after 470.784378ms: waiting for machine to come up
	I1101 01:00:58.781164   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:00:58.781784   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:00:58.781810   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:00:58.781686   60142 retry.go:31] will retry after 475.883045ms: waiting for machine to come up
	I1101 01:00:59.259400   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:00:59.259922   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:00:59.259964   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:00:59.259816   60142 retry.go:31] will retry after 533.372113ms: waiting for machine to come up
	I1101 01:00:59.794619   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:00:59.795307   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:00:59.795335   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:00:59.795222   60142 retry.go:31] will retry after 643.335947ms: waiting for machine to come up
	I1101 01:01:00.440339   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:00.440876   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:00.440901   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:00.440795   60142 retry.go:31] will retry after 899.488876ms: waiting for machine to come up
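	The retry.go lines in this stretch show the wait for the no-preload-008483 machine IP being retried with growing, jittered delays (306ms, 366ms, 470ms, and so on). A compact Go sketch of a jittered backoff loop follows; the growth factor, jitter range, and attempt count are assumptions for illustration, not the values used by the retry package above.

// backoff_sketch.go — illustrative only.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or attempts are exhausted,
// sleeping a jittered, growing delay between tries.
func retryWithBackoff(fn func() error, attempts int, base time.Duration) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		// Add up to 50% random jitter, then grow the base delay.
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2
	}
	return errors.New("machine did not come up in time")
}

func main() {
	tries := 0
	err := retryWithBackoff(func() error {
		tries++
		if tries < 4 {
			return errors.New("no IP yet") // stand-in for "unable to find current IP"
		}
		return nil
	}, 10, 300*time.Millisecond)
	fmt.Println(err, "after", tries, "tries")
}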
	I1101 01:00:57.545316   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:57.641733   58823 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:00:57.641812   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:57.655826   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:58.173767   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:58.674113   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:59.174394   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:59.674240   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:59.705758   58823 api_server.go:72] duration metric: took 2.064024888s to wait for apiserver process to appear ...
	I1101 01:00:59.705791   58823 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:00:59.705814   58823 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
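	Here process 58823 moves from probing for an apiserver pid to polling the healthz endpoint at https://192.168.39.90:8443/healthz. The self-contained Go sketch below shows one way to poll such an endpoint until it reports ok or a deadline passes; the skipped TLS verification, interval, and timeout are illustrative assumptions rather than minikube's real options.

// healthzpoll_sketch.go — illustrative only.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls url until it returns "ok" or the deadline passes.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver cert is not trusted by this host's store in the
		// sketch, so verification is skipped; do not do this outside tests.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver healthz did not report ok within %v", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.39.90:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}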
	I1101 01:00:58.517913   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:00.993028   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:03.059373   59148 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.409271602s)
	I1101 01:01:03.059403   59148 crio.go:451] Took 3.409395 seconds to extract the tarball
	I1101 01:01:03.059413   59148 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 01:01:03.101818   59148 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:01:03.153263   59148 crio.go:496] all images are preloaded for cri-o runtime.
	I1101 01:01:03.153284   59148 cache_images.go:84] Images are preloaded, skipping loading
	I1101 01:01:03.153341   59148 ssh_runner.go:195] Run: crio config
	I1101 01:01:03.228205   59148 cni.go:84] Creating CNI manager for ""
	I1101 01:01:03.228225   59148 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:01:03.228241   59148 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 01:01:03.228265   59148 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.97 APIServerPort:8444 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-639310 NodeName:default-k8s-diff-port-639310 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 01:01:03.228386   59148 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.97
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-639310"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 01:01:03.228463   59148 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-639310 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-639310 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1101 01:01:03.228517   59148 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 01:01:03.240926   59148 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 01:01:03.241014   59148 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 01:01:03.253440   59148 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I1101 01:01:03.271480   59148 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 01:01:03.292784   59148 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I1101 01:01:03.315295   59148 ssh_runner.go:195] Run: grep 192.168.72.97	control-plane.minikube.internal$ /etc/hosts
	I1101 01:01:03.319922   59148 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:01:03.332820   59148 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310 for IP: 192.168.72.97
	I1101 01:01:03.332869   59148 certs.go:190] acquiring lock for shared ca certs: {Name:mk6185b3938c4b51e80f57b7c81926c81b632b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:01:03.333015   59148 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key
	I1101 01:01:03.333067   59148 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key
	I1101 01:01:03.333174   59148 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/client.key
	I1101 01:01:03.333255   59148 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/apiserver.key.6d6df538
	I1101 01:01:03.333307   59148 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/proxy-client.key
	I1101 01:01:03.333469   59148 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem (1338 bytes)
	W1101 01:01:03.333531   59148 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504_empty.pem, impossibly tiny 0 bytes
	I1101 01:01:03.333542   59148 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 01:01:03.333580   59148 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem (1078 bytes)
	I1101 01:01:03.333632   59148 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem (1123 bytes)
	I1101 01:01:03.333699   59148 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem (1675 bytes)
	I1101 01:01:03.333761   59148 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:01:03.334633   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 01:01:03.361740   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 01:01:03.387535   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 01:01:03.414252   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 01:01:03.438492   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 01:01:03.463501   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 01:01:03.489800   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 01:01:03.517317   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 01:01:03.543330   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem --> /usr/share/ca-certificates/14504.pem (1338 bytes)
	I1101 01:01:03.567744   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /usr/share/ca-certificates/145042.pem (1708 bytes)
	I1101 01:01:03.594230   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 01:01:03.620857   59148 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 01:01:03.638676   59148 ssh_runner.go:195] Run: openssl version
	I1101 01:01:03.644139   59148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14504.pem && ln -fs /usr/share/ca-certificates/14504.pem /etc/ssl/certs/14504.pem"
	I1101 01:01:03.654667   59148 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14504.pem
	I1101 01:01:03.659261   59148 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 01:01:03.659322   59148 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14504.pem
	I1101 01:01:03.664592   59148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14504.pem /etc/ssl/certs/51391683.0"
	I1101 01:01:03.675482   59148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145042.pem && ln -fs /usr/share/ca-certificates/145042.pem /etc/ssl/certs/145042.pem"
	I1101 01:01:03.687903   59148 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145042.pem
	I1101 01:01:03.692901   59148 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 01:01:03.692970   59148 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145042.pem
	I1101 01:01:03.698691   59148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145042.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 01:01:03.709971   59148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 01:01:03.720612   59148 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:01:03.725306   59148 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:01:03.725397   59148 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:01:03.731004   59148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 01:01:03.743558   59148 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 01:01:03.748428   59148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 01:01:03.754404   59148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 01:01:03.760210   59148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 01:01:03.765964   59148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 01:01:03.771813   59148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 01:01:03.777659   59148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 01:01:03.783754   59148 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-639310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-639310 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.97 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 01:01:03.783846   59148 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 01:01:03.783903   59148 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:01:03.823390   59148 cri.go:89] found id: ""
	I1101 01:01:03.823473   59148 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 01:01:03.835317   59148 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1101 01:01:03.835339   59148 kubeadm.go:636] restartCluster start
	I1101 01:01:03.835393   59148 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 01:01:03.845532   59148 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:03.846629   59148 kubeconfig.go:92] found "default-k8s-diff-port-639310" server: "https://192.168.72.97:8444"
	I1101 01:01:03.849176   59148 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 01:01:03.859318   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:03.859387   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:03.871598   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:03.871620   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:03.871682   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:03.882903   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:04.383593   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:04.383684   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:04.398424   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:04.883982   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:04.884095   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:04.901344   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:01.341708   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:01.342186   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:01.342216   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:01.342138   60142 retry.go:31] will retry after 1.416825478s: waiting for machine to come up
	I1101 01:01:02.760851   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:02.761364   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:02.761391   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:02.761319   60142 retry.go:31] will retry after 1.783291063s: waiting for machine to come up
	I1101 01:01:04.546179   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:04.546731   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:04.546768   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:04.546684   60142 retry.go:31] will retry after 1.94150512s: waiting for machine to come up
	I1101 01:01:04.706156   58823 api_server.go:269] stopped: https://192.168.39.90:8443/healthz: Get "https://192.168.39.90:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 01:01:04.706226   58823 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I1101 01:01:05.474195   58823 api_server.go:279] https://192.168.39.90:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 01:01:05.474233   58823 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 01:01:05.975031   58823 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I1101 01:01:05.981753   58823 api_server.go:279] https://192.168.39.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1101 01:01:05.981796   58823 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1101 01:01:06.474331   58823 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I1101 01:01:06.483910   58823 api_server.go:279] https://192.168.39.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1101 01:01:06.483971   58823 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1101 01:01:06.974478   58823 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I1101 01:01:06.983225   58823 api_server.go:279] https://192.168.39.90:8443/healthz returned 200:
	ok
	I1101 01:01:06.992078   58823 api_server.go:141] control plane version: v1.16.0
	I1101 01:01:06.992104   58823 api_server.go:131] duration metric: took 7.286307099s to wait for apiserver health ...
	I1101 01:01:06.992112   58823 cni.go:84] Creating CNI manager for ""
	I1101 01:01:06.992118   58823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:01:06.994180   58823 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:01:06.995961   58823 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:01:07.007478   58823 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1101 01:01:07.025029   58823 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:01:07.036645   58823 system_pods.go:59] 7 kube-system pods found
	I1101 01:01:07.036685   58823 system_pods.go:61] "coredns-5644d7b6d9-swhtm" [5c5eacff-9271-46c5-add0-a3931b67876b] Running
	I1101 01:01:07.036692   58823 system_pods.go:61] "etcd-old-k8s-version-330042" [0b703394-0d1c-419d-8e08-c2c299046293] Running
	I1101 01:01:07.036699   58823 system_pods.go:61] "kube-apiserver-old-k8s-version-330042" [0dcb0028-fa22-4107-afa1-fbdd14b615ab] Running
	I1101 01:01:07.036706   58823 system_pods.go:61] "kube-controller-manager-old-k8s-version-330042" [adc1372e-45e1-4365-a039-c06af715cb24] Running
	I1101 01:01:07.036712   58823 system_pods.go:61] "kube-proxy-h86m8" [6db2c8ff-26f9-4f22-9cbd-2405a81d9128] Running
	I1101 01:01:07.036718   58823 system_pods.go:61] "kube-scheduler-old-k8s-version-330042" [f3f78aa9-fcb1-4b87-b7fa-f86c44e801c0] Running
	I1101 01:01:07.036724   58823 system_pods.go:61] "storage-provisioner" [710e45b8-dab7-4bbc-9ce8-f607db4cb63e] Running
	I1101 01:01:07.036733   58823 system_pods.go:74] duration metric: took 11.681153ms to wait for pod list to return data ...
	I1101 01:01:07.036745   58823 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:01:07.043383   58823 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:01:07.043420   58823 node_conditions.go:123] node cpu capacity is 2
	I1101 01:01:07.043433   58823 node_conditions.go:105] duration metric: took 6.681589ms to run NodePressure ...
	I1101 01:01:07.043454   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:07.419893   58823 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1101 01:01:07.425342   58823 retry.go:31] will retry after 365.112122ms: kubelet not initialised
	I1101 01:01:03.491770   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:05.989935   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:05.383225   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:05.383308   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:05.399889   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:05.884036   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:05.884134   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:05.899867   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:06.383118   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:06.383241   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:06.399285   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:06.883379   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:06.883497   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:06.895160   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:07.383835   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:07.383951   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:07.401766   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:07.883254   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:07.883368   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:07.900024   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:08.383405   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:08.383494   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:08.401659   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:08.883099   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:08.883189   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:08.898348   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:09.383858   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:09.384003   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:09.396380   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:09.884003   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:09.884128   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:09.901031   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:06.489565   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:06.490176   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:06.490200   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:06.490117   60142 retry.go:31] will retry after 2.694877407s: waiting for machine to come up
	I1101 01:01:09.186086   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:09.186554   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:09.186584   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:09.186497   60142 retry.go:31] will retry after 2.651563817s: waiting for machine to come up
	I1101 01:01:07.799240   58823 retry.go:31] will retry after 519.025086ms: kubelet not initialised
	I1101 01:01:08.325024   58823 retry.go:31] will retry after 345.44325ms: kubelet not initialised
	I1101 01:01:08.674686   58823 retry.go:31] will retry after 665.113314ms: kubelet not initialised
	I1101 01:01:09.345867   58823 retry.go:31] will retry after 1.421023017s: kubelet not initialised
	I1101 01:01:10.773100   58823 retry.go:31] will retry after 1.15707988s: kubelet not initialised
	I1101 01:01:11.936215   58823 retry.go:31] will retry after 3.290674523s: kubelet not initialised
	I1101 01:01:08.490229   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:10.990967   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:12.991285   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:10.383739   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:10.383800   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:10.398972   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:10.882991   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:10.883089   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:10.897346   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:11.383976   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:11.384059   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:11.396332   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:11.883903   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:11.884020   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:11.897279   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:12.383675   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:12.383786   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:12.399623   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:12.883112   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:12.883191   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:12.895484   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:13.383069   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:13.383181   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:13.395417   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:13.860229   59148 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1101 01:01:13.860262   59148 kubeadm.go:1128] stopping kube-system containers ...
	I1101 01:01:13.860277   59148 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 01:01:13.860360   59148 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:01:13.901712   59148 cri.go:89] found id: ""
	I1101 01:01:13.901809   59148 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 01:01:13.918956   59148 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:01:13.931401   59148 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:01:13.931477   59148 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:01:13.943486   59148 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 01:01:13.943512   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:14.077324   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:11.839684   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:11.840140   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:11.840169   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:11.840105   60142 retry.go:31] will retry after 4.157820096s: waiting for machine to come up
	I1101 01:01:15.233157   58823 retry.go:31] will retry after 3.531336164s: kubelet not initialised
	I1101 01:01:15.490358   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:17.491953   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:16.001208   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.001765   58676 main.go:141] libmachine: (no-preload-008483) Found IP for machine: 192.168.50.140
	I1101 01:01:16.001790   58676 main.go:141] libmachine: (no-preload-008483) Reserving static IP address...
	I1101 01:01:16.001806   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has current primary IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.002298   58676 main.go:141] libmachine: (no-preload-008483) Reserved static IP address: 192.168.50.140
	I1101 01:01:16.002338   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "no-preload-008483", mac: "52:54:00:6c:aa:b5", ip: "192.168.50.140"} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.002357   58676 main.go:141] libmachine: (no-preload-008483) Waiting for SSH to be available...
	I1101 01:01:16.002381   58676 main.go:141] libmachine: (no-preload-008483) DBG | skip adding static IP to network mk-no-preload-008483 - found existing host DHCP lease matching {name: "no-preload-008483", mac: "52:54:00:6c:aa:b5", ip: "192.168.50.140"}
	I1101 01:01:16.002397   58676 main.go:141] libmachine: (no-preload-008483) DBG | Getting to WaitForSSH function...
	I1101 01:01:16.004912   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.005349   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.005387   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.005528   58676 main.go:141] libmachine: (no-preload-008483) DBG | Using SSH client type: external
	I1101 01:01:16.005550   58676 main.go:141] libmachine: (no-preload-008483) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa (-rw-------)
	I1101 01:01:16.005589   58676 main.go:141] libmachine: (no-preload-008483) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.140 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 01:01:16.005607   58676 main.go:141] libmachine: (no-preload-008483) DBG | About to run SSH command:
	I1101 01:01:16.005621   58676 main.go:141] libmachine: (no-preload-008483) DBG | exit 0
	I1101 01:01:16.100131   58676 main.go:141] libmachine: (no-preload-008483) DBG | SSH cmd err, output: <nil>: 
	I1101 01:01:16.100576   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetConfigRaw
	I1101 01:01:16.101304   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetIP
	I1101 01:01:16.104212   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.104482   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.104528   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.104710   58676 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/config.json ...
	I1101 01:01:16.104933   58676 machine.go:88] provisioning docker machine ...
	I1101 01:01:16.104951   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:01:16.105159   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetMachineName
	I1101 01:01:16.105351   58676 buildroot.go:166] provisioning hostname "no-preload-008483"
	I1101 01:01:16.105375   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetMachineName
	I1101 01:01:16.105551   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:16.107936   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.108287   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.108333   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.108422   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:16.108594   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.108734   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.108861   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:16.109041   58676 main.go:141] libmachine: Using SSH client type: native
	I1101 01:01:16.109531   58676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I1101 01:01:16.109557   58676 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-008483 && echo "no-preload-008483" | sudo tee /etc/hostname
	I1101 01:01:16.249893   58676 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-008483
	
	I1101 01:01:16.249924   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:16.253130   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.253531   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.253571   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.253879   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:16.254106   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.254304   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.254441   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:16.254608   58676 main.go:141] libmachine: Using SSH client type: native
	I1101 01:01:16.254965   58676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I1101 01:01:16.254987   58676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-008483' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-008483/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-008483' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 01:01:16.386797   58676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 01:01:16.386834   58676 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1101 01:01:16.386862   58676 buildroot.go:174] setting up certificates
	I1101 01:01:16.386870   58676 provision.go:83] configureAuth start
	I1101 01:01:16.386879   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetMachineName
	I1101 01:01:16.387149   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetIP
	I1101 01:01:16.390409   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.390812   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.390844   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.391055   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:16.393580   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.394122   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.394154   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.394352   58676 provision.go:138] copyHostCerts
	I1101 01:01:16.394425   58676 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem, removing ...
	I1101 01:01:16.394438   58676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 01:01:16.394506   58676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1101 01:01:16.394646   58676 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem, removing ...
	I1101 01:01:16.394658   58676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 01:01:16.394690   58676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1101 01:01:16.394774   58676 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem, removing ...
	I1101 01:01:16.394786   58676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 01:01:16.394811   58676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1101 01:01:16.394874   58676 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.no-preload-008483 san=[192.168.50.140 192.168.50.140 localhost 127.0.0.1 minikube no-preload-008483]
	I1101 01:01:16.593958   58676 provision.go:172] copyRemoteCerts
	I1101 01:01:16.594024   58676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 01:01:16.594046   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:16.597073   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.597449   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.597484   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.597723   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:16.597956   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.598108   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:16.598247   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:01:16.689574   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 01:01:16.714820   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1101 01:01:16.744383   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 01:01:16.769305   58676 provision.go:86] duration metric: configureAuth took 382.416455ms
	I1101 01:01:16.769338   58676 buildroot.go:189] setting minikube options for container-runtime
	I1101 01:01:16.769596   58676 config.go:182] Loaded profile config "no-preload-008483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:01:16.769692   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:16.773209   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.773565   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.773628   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.773828   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:16.774071   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.774353   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.774570   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:16.774772   58676 main.go:141] libmachine: Using SSH client type: native
	I1101 01:01:16.775132   58676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I1101 01:01:16.775150   58676 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 01:01:17.110397   58676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 01:01:17.110481   58676 machine.go:91] provisioned docker machine in 1.005532035s
	I1101 01:01:17.110500   58676 start.go:300] post-start starting for "no-preload-008483" (driver="kvm2")
	I1101 01:01:17.110513   58676 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 01:01:17.110559   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:01:17.110920   58676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 01:01:17.110948   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:17.114342   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.114794   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:17.114829   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.115028   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:17.115227   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:17.115440   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:17.115621   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:01:17.210514   58676 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 01:01:17.216393   58676 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 01:01:17.216423   58676 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1101 01:01:17.216522   58676 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1101 01:01:17.216640   58676 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> 145042.pem in /etc/ssl/certs
	I1101 01:01:17.216773   58676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 01:01:17.229604   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:01:17.255095   58676 start.go:303] post-start completed in 144.577436ms
	I1101 01:01:17.255120   58676 fix.go:56] fixHost completed within 21.173509578s
	I1101 01:01:17.255192   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:17.258433   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.258833   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:17.258858   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.259085   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:17.259305   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:17.259478   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:17.259628   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:17.259825   58676 main.go:141] libmachine: Using SSH client type: native
	I1101 01:01:17.260306   58676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I1101 01:01:17.260321   58676 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1101 01:01:17.389718   58676 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698800477.337229135
	
	I1101 01:01:17.389748   58676 fix.go:206] guest clock: 1698800477.337229135
	I1101 01:01:17.389770   58676 fix.go:219] Guest: 2023-11-01 01:01:17.337229135 +0000 UTC Remote: 2023-11-01 01:01:17.255124581 +0000 UTC m=+361.362725964 (delta=82.104554ms)
	I1101 01:01:17.389797   58676 fix.go:190] guest clock delta is within tolerance: 82.104554ms
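	A quick aside on the step above: the guest-clock check is nothing more than reading the VM's wall clock over SSH and diffing it against the host's. A rough shell sketch of the same idea (user and IP are the ones from this run; the 5-second tolerance is an assumed value, not minikube's):
	    # Compare the guest's wall clock with the host's and report the skew.
	    guest=$(ssh docker@192.168.50.140 'date +%s')
	    host=$(date +%s)
	    delta=$(( host - guest ))
	    [ "${delta#-}" -le 5 ] && echo "guest clock delta ${delta}s within tolerance" \
	                           || echo "guest clock skew ${delta}s too large"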
	I1101 01:01:17.389804   58676 start.go:83] releasing machines lock for "no-preload-008483", held for 21.308227601s
	I1101 01:01:17.389828   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:01:17.390149   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetIP
	I1101 01:01:17.393289   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.393692   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:17.393723   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.393937   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:01:17.394589   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:01:17.394780   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:01:17.394877   58676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 01:01:17.394918   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:17.395060   58676 ssh_runner.go:195] Run: cat /version.json
	I1101 01:01:17.395115   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:17.398497   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:17.398497   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.398581   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:17.398642   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.398665   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.398700   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:17.398853   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:17.398861   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:17.398881   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.398995   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:01:17.399475   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:17.399644   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:17.399798   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:17.399976   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:01:17.524462   58676 ssh_runner.go:195] Run: systemctl --version
	I1101 01:01:17.530328   58676 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 01:01:17.678956   58676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 01:01:17.686754   58676 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 01:01:17.686834   58676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 01:01:17.705358   58676 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 01:01:17.705388   58676 start.go:472] detecting cgroup driver to use...
	I1101 01:01:17.705527   58676 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 01:01:17.722410   58676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 01:01:17.739380   58676 docker.go:204] disabling cri-docker service (if available) ...
	I1101 01:01:17.739443   58676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 01:01:17.755953   58676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 01:01:17.769672   58676 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 01:01:17.900801   58676 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 01:01:18.027283   58676 docker.go:220] disabling docker service ...
	I1101 01:01:18.027378   58676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 01:01:18.041230   58676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 01:01:18.052784   58676 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 01:01:18.165341   58676 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 01:01:18.276403   58676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 01:01:18.289618   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 01:01:18.308480   58676 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 01:01:18.308562   58676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:01:18.318597   58676 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 01:01:18.318673   58676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:01:18.328312   58676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:01:18.340054   58676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:01:18.351854   58676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
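	The four sed edits plus the rm above are the whole CRI-O reconfiguration for this profile: point the runtime at the registry.k8s.io/pause:3.9 pause image, force the cgroupfs cgroup manager, and pin conmon's cgroup to "pod". A quick way to confirm the drop-in ended up in that state (a verification sketch; the expected values are taken from the commands just logged):
	    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	    # expected:
	    #   pause_image = "registry.k8s.io/pause:3.9"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    sudo systemctl restart crio    # the edits only take effect after a restart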
	I1101 01:01:18.364129   58676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 01:01:18.372789   58676 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 01:01:18.372879   58676 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 01:01:18.385792   58676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
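	The sysctl failure above is expected on a fresh guest: /proc/sys/net/bridge/* only exists once the br_netfilter module is loaded, which is exactly what the following modprobe fixes before IPv4 forwarding is switched on. The equivalent prerequisite setup as a standalone sketch (this run relies on the module's default for bridge-nf-call-iptables; setting it explicitly, as below, is an extra assumed step):
	    sudo modprobe br_netfilter                            # creates /proc/sys/net/bridge/*
	    sudo sysctl -w net.bridge.bridge-nf-call-iptables=1   # bridged traffic hits iptables
	    sudo sysctl -w net.ipv4.ip_forward=1                  # required for pod-to-pod routing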
	I1101 01:01:18.394803   58676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 01:01:18.503941   58676 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 01:01:18.687034   58676 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 01:01:18.687137   58676 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 01:01:18.691750   58676 start.go:540] Will wait 60s for crictl version
	I1101 01:01:18.691818   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:18.695752   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 01:01:18.735012   58676 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
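	The readiness probe here is simply: wait for the CRI-O socket to appear, then ask crictl for the runtime version over CRI. A compact sketch of the same check (socket path and the 60s budget mirror the values logged above):
	    # Wait up to 60s for the socket, then query the runtime.
	    for _ in $(seq 1 60); do
	      [ -S /var/run/crio/crio.sock ] && break
	      sleep 1
	    done
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	    # expect RuntimeName: cri-o, RuntimeVersion: 1.24.1, RuntimeApiVersion: v1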
	I1101 01:01:18.735098   58676 ssh_runner.go:195] Run: crio --version
	I1101 01:01:18.782835   58676 ssh_runner.go:195] Run: crio --version
	I1101 01:01:18.829727   58676 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1101 01:01:15.054547   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:15.248625   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:15.325492   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:15.396782   59148 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:01:15.396854   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:15.420220   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:15.941271   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:16.441997   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:16.942240   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:17.441850   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:17.941784   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:17.965191   59148 api_server.go:72] duration metric: took 2.5684081s to wait for apiserver process to appear ...
	I1101 01:01:17.965220   59148 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:01:17.965238   59148 api_server.go:253] Checking apiserver healthz at https://192.168.72.97:8444/healthz ...
	I1101 01:01:18.831303   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetIP
	I1101 01:01:18.834574   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:18.834969   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:18.835003   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:18.835233   58676 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1101 01:01:18.839259   58676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
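	The /etc/hosts one-liner above is an idempotent update: drop any existing host.minikube.internal line, append the fresh mapping, and install the result with a single sudo cp so the file is never left half-written. Spelled out as a reusable sketch (the helper name is hypothetical; IP and hostname are the ones logged):
	    update_hosts_entry() {   # hypothetical helper, not part of minikube
	      local ip="$1" name="$2"
	      { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/h.$$
	      sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$
	    }
	    update_hosts_entry 192.168.50.1 host.minikube.internal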
	I1101 01:01:18.853665   58676 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 01:01:18.853725   58676 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:01:18.890995   58676 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1101 01:01:18.891024   58676 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.3 registry.k8s.io/kube-controller-manager:v1.28.3 registry.k8s.io/kube-scheduler:v1.28.3 registry.k8s.io/kube-proxy:v1.28.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1101 01:01:18.891130   58676 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1101 01:01:18.891143   58676 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.3
	I1101 01:01:18.891144   58676 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1101 01:01:18.891201   58676 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1101 01:01:18.891263   58676 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.3
	I1101 01:01:18.891397   58676 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.3
	I1101 01:01:18.891415   58676 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1101 01:01:18.891134   58676 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:01:18.892729   58676 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.3
	I1101 01:01:18.892742   58676 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:01:18.892747   58676 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1101 01:01:18.892760   58676 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1101 01:01:18.892760   58676 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1101 01:01:18.892729   58676 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.3
	I1101 01:01:18.892790   58676 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.3
	I1101 01:01:18.892835   58676 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1101 01:01:19.112836   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1101 01:01:19.131170   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.3
	I1101 01:01:19.147328   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.3
	I1101 01:01:19.148513   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I1101 01:01:19.155909   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.3
	I1101 01:01:19.163598   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.3
	I1101 01:01:19.166436   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I1101 01:01:19.290823   58676 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.3" needs transfer: "registry.k8s.io/kube-proxy:v1.28.3" does not exist at hash "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf" in container runtime
	I1101 01:01:19.290888   58676 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.3
	I1101 01:01:19.290943   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:19.331622   58676 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.3" does not exist at hash "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076" in container runtime
	I1101 01:01:19.331709   58676 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.3" does not exist at hash "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4" in container runtime
	I1101 01:01:19.331776   58676 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.3
	I1101 01:01:19.331717   58676 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.3
	I1101 01:01:19.331872   58676 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.3" does not exist at hash "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3" in container runtime
	I1101 01:01:19.331899   58676 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1101 01:01:19.331905   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:19.331645   58676 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1101 01:01:19.331979   58676 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1101 01:01:19.331986   58676 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1101 01:01:19.332011   58676 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1101 01:01:19.332023   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:19.331945   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:19.332053   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:19.332040   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.3
	I1101 01:01:19.331842   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:19.342099   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.3
	I1101 01:01:19.396521   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I1101 01:01:19.396603   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.3
	I1101 01:01:19.396612   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3
	I1101 01:01:19.396628   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.3
	I1101 01:01:19.396681   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1101 01:01:19.396700   58676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.3
	I1101 01:01:19.396750   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3
	I1101 01:01:19.396842   58676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1101 01:01:19.497732   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.3 (exists)
	I1101 01:01:19.497756   58676 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.3
	I1101 01:01:19.497784   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1101 01:01:19.497813   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3
	I1101 01:01:19.497871   58676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0
	I1101 01:01:19.497924   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3
	I1101 01:01:19.497964   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.3 (exists)
	I1101 01:01:19.498009   58676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1101 01:01:19.498015   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3
	I1101 01:01:19.498054   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1101 01:01:19.498111   58676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1
	I1101 01:01:19.498117   58676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1101 01:01:19.764214   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:01:18.769797   58823 retry.go:31] will retry after 5.956460089s: kubelet not initialised
	I1101 01:01:19.987384   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:21.989585   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:22.277798   59148 api_server.go:279] https://192.168.72.97:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 01:01:22.277829   59148 api_server.go:103] status: https://192.168.72.97:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 01:01:22.277839   59148 api_server.go:253] Checking apiserver healthz at https://192.168.72.97:8444/healthz ...
	I1101 01:01:22.371756   59148 api_server.go:279] https://192.168.72.97:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 01:01:22.371796   59148 api_server.go:103] status: https://192.168.72.97:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 01:01:22.872332   59148 api_server.go:253] Checking apiserver healthz at https://192.168.72.97:8444/healthz ...
	I1101 01:01:22.884543   59148 api_server.go:279] https://192.168.72.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:01:22.884587   59148 api_server.go:103] status: https://192.168.72.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:01:23.372033   59148 api_server.go:253] Checking apiserver healthz at https://192.168.72.97:8444/healthz ...
	I1101 01:01:23.381608   59148 api_server.go:279] https://192.168.72.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:01:23.381657   59148 api_server.go:103] status: https://192.168.72.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:01:23.872319   59148 api_server.go:253] Checking apiserver healthz at https://192.168.72.97:8444/healthz ...
	I1101 01:01:23.879515   59148 api_server.go:279] https://192.168.72.97:8444/healthz returned 200:
	ok
	I1101 01:01:23.892376   59148 api_server.go:141] control plane version: v1.28.3
	I1101 01:01:23.892412   59148 api_server.go:131] duration metric: took 5.927178892s to wait for apiserver health ...
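	The healthz progression above is the normal restart sequence: 403 while anonymous access is still being wired up, 500 while the rbac/bootstrap-roles post-start hook finishes, then a plain 200. Polling the same endpoint by hand looks roughly like this (endpoint from the log; the insecure -k flag and the interval are assumptions, since minikube authenticates with the cluster's client certs):
	    # Poll until the apiserver reports healthy, then ask for the per-check breakdown.
	    until curl -sk -o /dev/null -w '%{http_code}' https://192.168.72.97:8444/healthz | grep -q '^200$'; do
	      sleep 0.5
	    done
	    curl -sk 'https://192.168.72.97:8444/healthz?verbose'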
	I1101 01:01:23.892424   59148 cni.go:84] Creating CNI manager for ""
	I1101 01:01:23.892433   59148 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:01:23.894577   59148 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:01:23.896163   59148 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:01:23.928482   59148 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
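	The bridge CNI config is written straight from memory (457 bytes), so its exact contents aren't captured in this log; an earlier step renamed the stock podman bridge config to *.mk_disabled, so this becomes the only active conflist. To see what actually landed on disk (inspection sketch; the expected shape is an assumption, not the file contents):
	    ls /etc/cni/net.d/                       # 1-k8s.conflist plus any *.mk_disabled leftovers
	    sudo cat /etc/cni/net.d/1-k8s.conflist   # typically a "bridge" plugin with host-local IPAM on the cluster's pod CIDR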
	I1101 01:01:23.952485   59148 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:01:23.968054   59148 system_pods.go:59] 8 kube-system pods found
	I1101 01:01:23.968095   59148 system_pods.go:61] "coredns-5dd5756b68-lmxx8" [c74c5ddc-56a8-422c-a140-1fdd14ef817d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 01:01:23.968115   59148 system_pods.go:61] "etcd-default-k8s-diff-port-639310" [1baf2571-f6c6-43bc-8051-e72f7eb4ed70] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 01:01:23.968126   59148 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-639310" [9cbc66c6-7c66-4b24-9400-a5add2edec14] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 01:01:23.968145   59148 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-639310" [99945be6-6fb8-4da6-8c6a-c25a2023d2d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 01:01:23.968158   59148 system_pods.go:61] "kube-proxy-f45wg" [abe74c94-5140-4c35-a141-d995652948e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 01:01:23.968167   59148 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-639310" [299c1962-1945-4525-90c7-384d515dc4e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 01:01:23.968188   59148 system_pods.go:61] "metrics-server-57f55c9bc5-6szl7" [1e00ef03-d5f4-4e8b-a247-8c31a5492f9e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:01:23.968201   59148 system_pods.go:61] "storage-provisioner" [fe2e7631-0564-44d2-afbd-578fb37f6a04] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 01:01:23.968215   59148 system_pods.go:74] duration metric: took 15.694719ms to wait for pod list to return data ...
	I1101 01:01:23.968224   59148 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:01:23.972141   59148 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:01:23.972177   59148 node_conditions.go:123] node cpu capacity is 2
	I1101 01:01:23.972191   59148 node_conditions.go:105] duration metric: took 3.96106ms to run NodePressure ...
	I1101 01:01:23.972214   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:24.253558   59148 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1101 01:01:24.258842   59148 kubeadm.go:787] kubelet initialised
	I1101 01:01:24.258869   59148 kubeadm.go:788] duration metric: took 5.283339ms waiting for restarted kubelet to initialise ...
	I1101 01:01:24.258878   59148 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:01:24.265507   59148 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-lmxx8" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:24.271381   59148 pod_ready.go:97] node "default-k8s-diff-port-639310" hosting pod "coredns-5dd5756b68-lmxx8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.271408   59148 pod_ready.go:81] duration metric: took 5.876802ms waiting for pod "coredns-5dd5756b68-lmxx8" in "kube-system" namespace to be "Ready" ...
	E1101 01:01:24.271418   59148 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-639310" hosting pod "coredns-5dd5756b68-lmxx8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.271426   59148 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:24.277446   59148 pod_ready.go:97] node "default-k8s-diff-port-639310" hosting pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.277476   59148 pod_ready.go:81] duration metric: took 6.04229ms waiting for pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	E1101 01:01:24.277487   59148 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-639310" hosting pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.277495   59148 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:24.283557   59148 pod_ready.go:97] node "default-k8s-diff-port-639310" hosting pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.283604   59148 pod_ready.go:81] duration metric: took 6.094277ms waiting for pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	E1101 01:01:24.283617   59148 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-639310" hosting pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.283630   59148 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:24.357249   59148 pod_ready.go:97] node "default-k8s-diff-port-639310" hosting pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.357288   59148 pod_ready.go:81] duration metric: took 73.64295ms waiting for pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	E1101 01:01:24.357302   59148 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-639310" hosting pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.357319   59148 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f45wg" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:21.457919   58676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0: (1.960002941s)
	I1101 01:01:21.457955   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I1101 01:01:21.458111   58676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.28.3: (1.960074529s)
	I1101 01:01:21.458140   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.3 (exists)
	I1101 01:01:21.458152   58676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.28.3: (1.960014372s)
	I1101 01:01:21.458176   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.3 (exists)
	I1101 01:01:21.458226   58676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1: (1.960094366s)
	I1101 01:01:21.458252   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I1101 01:01:21.458267   58676 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.694021872s)
	I1101 01:01:21.458306   58676 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1101 01:01:21.458344   58676 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:01:21.458392   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:21.458644   58676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3: (1.960815533s)
	I1101 01:01:21.458659   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3 from cache
	I1101 01:01:21.458686   58676 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1101 01:01:21.458718   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1101 01:01:21.462463   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:01:23.757842   58676 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.295346464s)
	I1101 01:01:23.757911   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1101 01:01:23.757849   58676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3: (2.299099605s)
	I1101 01:01:23.757965   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3 from cache
	I1101 01:01:23.758006   58676 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I1101 01:01:23.758025   58676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1101 01:01:23.758040   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I1101 01:01:23.764726   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1101 01:01:24.732471   58823 retry.go:31] will retry after 9.584941607s: kubelet not initialised
	I1101 01:01:23.990727   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:26.489463   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:25.156181   59148 pod_ready.go:92] pod "kube-proxy-f45wg" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:25.156211   59148 pod_ready.go:81] duration metric: took 798.883976ms waiting for pod "kube-proxy-f45wg" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:25.156225   59148 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:27.476794   59148 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:29.974327   59148 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:29.974364   59148 pod_ready.go:81] duration metric: took 4.818128166s waiting for pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:29.974381   59148 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:28.990433   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:30.991378   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:32.004594   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:34.006695   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:31.399348   58676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.641283444s)
	I1101 01:01:31.399378   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I1101 01:01:31.399412   58676 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1101 01:01:31.399465   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1101 01:01:33.857323   58676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3: (2.45781579s)
	I1101 01:01:33.857356   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3 from cache
	I1101 01:01:33.857384   58676 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1101 01:01:33.857444   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1101 01:01:34.322788   58823 retry.go:31] will retry after 7.673111332s: kubelet not initialised
	I1101 01:01:33.488934   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:35.489417   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:37.989455   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:36.506432   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:39.004133   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:36.328716   58676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3: (2.471243195s)
	I1101 01:01:36.328755   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3 from cache
	I1101 01:01:36.328788   58676 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I1101 01:01:36.328839   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I1101 01:01:37.691820   58676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.362944664s)
	I1101 01:01:37.691851   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I1101 01:01:37.691877   58676 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1101 01:01:37.691978   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1101 01:01:38.442125   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1101 01:01:38.442181   58676 cache_images.go:123] Successfully loaded all cached images
	I1101 01:01:38.442188   58676 cache_images.go:92] LoadImages completed in 19.55115042s
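	Every image above went through the same no-preload cycle: podman image inspect to see whether the runtime already has it, crictl rmi any copy that does not match the expected hash, then podman load of the tarball copied from the host's cache into /var/lib/minikube/images. Condensed into a per-image loop (the image list and cache directory come from the log; the loop itself is an illustration, not minikube's code):
	    CACHE=/var/lib/minikube/images
	    for img in kube-apiserver_v1.28.3 kube-controller-manager_v1.28.3 kube-scheduler_v1.28.3 \
	               kube-proxy_v1.28.3 etcd_3.5.9-0 coredns_v1.10.1 storage-provisioner_v5; do
	      sudo podman load -i "$CACHE/$img"     # import the cached tarball into the shared containers/storage
	    done
	    sudo crictl images                      # confirm CRI-O now sees all of them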
	I1101 01:01:38.442260   58676 ssh_runner.go:195] Run: crio config
	I1101 01:01:38.499778   58676 cni.go:84] Creating CNI manager for ""
	I1101 01:01:38.499799   58676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:01:38.499820   58676 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 01:01:38.499846   58676 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.140 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-008483 NodeName:no-preload-008483 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 01:01:38.500007   58676 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.140
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-008483"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.140
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.140"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 01:01:38.500076   58676 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-008483 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:no-preload-008483 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1101 01:01:38.500135   58676 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 01:01:38.510073   58676 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 01:01:38.510160   58676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 01:01:38.517853   58676 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1101 01:01:38.534085   58676 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 01:01:38.549312   58676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I1101 01:01:38.566320   58676 ssh_runner.go:195] Run: grep 192.168.50.140	control-plane.minikube.internal$ /etc/hosts
	I1101 01:01:38.569923   58676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.140	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:01:38.582147   58676 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483 for IP: 192.168.50.140
	I1101 01:01:38.582180   58676 certs.go:190] acquiring lock for shared ca certs: {Name:mk6185b3938c4b51e80f57b7c81926c81b632b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:01:38.582353   58676 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key
	I1101 01:01:38.582412   58676 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key
	I1101 01:01:38.582512   58676 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/client.key
	I1101 01:01:38.582596   58676 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/apiserver.key.306fa7af
	I1101 01:01:38.582664   58676 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/proxy-client.key
	I1101 01:01:38.582841   58676 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem (1338 bytes)
	W1101 01:01:38.582887   58676 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504_empty.pem, impossibly tiny 0 bytes
	I1101 01:01:38.582903   58676 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 01:01:38.582941   58676 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem (1078 bytes)
	I1101 01:01:38.582978   58676 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem (1123 bytes)
	I1101 01:01:38.583015   58676 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem (1675 bytes)
	I1101 01:01:38.583082   58676 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:01:38.583827   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 01:01:38.607306   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 01:01:38.631666   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 01:01:38.655201   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 01:01:38.678237   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 01:01:38.700410   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 01:01:38.726807   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 01:01:38.752672   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 01:01:38.776285   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 01:01:38.799902   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem --> /usr/share/ca-certificates/14504.pem (1338 bytes)
	I1101 01:01:38.823790   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /usr/share/ca-certificates/145042.pem (1708 bytes)
	I1101 01:01:38.847407   58676 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 01:01:38.863594   58676 ssh_runner.go:195] Run: openssl version
	I1101 01:01:38.869214   58676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14504.pem && ln -fs /usr/share/ca-certificates/14504.pem /etc/ssl/certs/14504.pem"
	I1101 01:01:38.878725   58676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14504.pem
	I1101 01:01:38.883007   58676 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 01:01:38.883069   58676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14504.pem
	I1101 01:01:38.888251   58676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14504.pem /etc/ssl/certs/51391683.0"
	I1101 01:01:38.899894   58676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145042.pem && ln -fs /usr/share/ca-certificates/145042.pem /etc/ssl/certs/145042.pem"
	I1101 01:01:38.909658   58676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145042.pem
	I1101 01:01:38.914011   58676 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 01:01:38.914088   58676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145042.pem
	I1101 01:01:38.919323   58676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145042.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 01:01:38.928836   58676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 01:01:38.937988   58676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:01:38.943540   58676 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:01:38.943607   58676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:01:38.949543   58676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 01:01:38.959098   58676 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 01:01:38.963149   58676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 01:01:38.968868   58676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 01:01:38.974315   58676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 01:01:38.979746   58676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 01:01:38.985852   58676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 01:01:38.991864   58676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 01:01:38.998153   58676 kubeadm.go:404] StartCluster: {Name:no-preload-008483 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-008483 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.140 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 01:01:38.998271   58676 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 01:01:38.998340   58676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:01:39.045797   58676 cri.go:89] found id: ""
	I1101 01:01:39.045870   58676 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 01:01:39.056166   58676 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1101 01:01:39.056197   58676 kubeadm.go:636] restartCluster start
	I1101 01:01:39.056252   58676 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 01:01:39.065191   58676 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:39.066337   58676 kubeconfig.go:92] found "no-preload-008483" server: "https://192.168.50.140:8443"
	I1101 01:01:39.068843   58676 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 01:01:39.077558   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:39.077606   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:39.088105   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:39.088123   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:39.088168   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:39.100203   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:39.600957   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:39.601029   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:39.612652   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:40.101101   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:40.101191   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:40.113249   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:40.600487   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:40.600552   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:40.612183   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:42.002176   58823 kubeadm.go:787] kubelet initialised
	I1101 01:01:42.002198   58823 kubeadm.go:788] duration metric: took 34.582278796s waiting for restarted kubelet to initialise ...
	I1101 01:01:42.002211   58823 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:01:42.007691   58823 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-m8mn8" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.012657   58823 pod_ready.go:92] pod "coredns-5644d7b6d9-m8mn8" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:42.012677   58823 pod_ready.go:81] duration metric: took 4.961011ms waiting for pod "coredns-5644d7b6d9-m8mn8" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.012687   58823 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-swhtm" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.017099   58823 pod_ready.go:92] pod "coredns-5644d7b6d9-swhtm" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:42.017124   58823 pod_ready.go:81] duration metric: took 4.429709ms waiting for pod "coredns-5644d7b6d9-swhtm" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.017137   58823 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.021376   58823 pod_ready.go:92] pod "etcd-old-k8s-version-330042" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:42.021403   58823 pod_ready.go:81] duration metric: took 4.25772ms waiting for pod "etcd-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.021415   58823 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.026057   58823 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-330042" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:42.026080   58823 pod_ready.go:81] duration metric: took 4.65685ms waiting for pod "kube-apiserver-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.026096   58823 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.401057   58823 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-330042" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:42.401085   58823 pod_ready.go:81] duration metric: took 374.980275ms waiting for pod "kube-controller-manager-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.401099   58823 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-h86m8" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:40.487876   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:42.488609   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:41.504485   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:44.005180   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:41.100662   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:41.100773   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:41.113339   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:41.601121   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:41.601195   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:41.613986   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:42.101110   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:42.101188   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:42.113963   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:42.600356   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:42.600458   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:42.612154   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:43.100679   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:43.100767   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:43.113009   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:43.601328   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:43.601402   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:43.612862   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:44.101146   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:44.101261   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:44.113407   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:44.600812   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:44.600955   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:44.613161   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:45.100665   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:45.100769   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:45.112905   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:45.600416   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:45.600515   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:45.612930   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:42.801878   58823 pod_ready.go:92] pod "kube-proxy-h86m8" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:42.801899   58823 pod_ready.go:81] duration metric: took 400.793617ms waiting for pod "kube-proxy-h86m8" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.801907   58823 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:43.201586   58823 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-330042" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:43.201618   58823 pod_ready.go:81] duration metric: took 399.702904ms waiting for pod "kube-scheduler-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:43.201632   58823 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:45.508037   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:44.489092   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:46.493162   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:46.506251   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:49.004539   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:46.100957   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:46.101023   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:46.113645   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:46.600681   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:46.600781   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:46.612564   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:47.101090   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:47.101156   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:47.113500   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:47.601105   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:47.601244   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:47.613091   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:48.100608   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:48.100725   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:48.112995   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:48.600520   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:48.600603   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:48.612240   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:49.077973   58676 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1101 01:01:49.078017   58676 kubeadm.go:1128] stopping kube-system containers ...
	I1101 01:01:49.078031   58676 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 01:01:49.078097   58676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:01:49.117615   58676 cri.go:89] found id: ""
	I1101 01:01:49.117689   58676 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 01:01:49.133583   58676 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:01:49.142851   58676 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:01:49.142922   58676 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:01:49.151952   58676 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 01:01:49.151973   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:49.270827   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:50.046638   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:50.252510   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:50.327660   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:50.398419   58676 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:01:50.398511   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:50.415262   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:50.931672   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:47.508466   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:49.509032   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:51.510816   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:48.987561   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:50.989519   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:52.989978   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:51.004704   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:53.006138   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:51.431168   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:51.931127   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:52.431292   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:52.462617   58676 api_server.go:72] duration metric: took 2.064198698s to wait for apiserver process to appear ...
	I1101 01:01:52.462644   58676 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:01:52.462658   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:52.463297   58676 api_server.go:269] stopped: https://192.168.50.140:8443/healthz: Get "https://192.168.50.140:8443/healthz": dial tcp 192.168.50.140:8443: connect: connection refused
	I1101 01:01:52.463360   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:52.463831   58676 api_server.go:269] stopped: https://192.168.50.140:8443/healthz: Get "https://192.168.50.140:8443/healthz": dial tcp 192.168.50.140:8443: connect: connection refused
	I1101 01:01:52.964290   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:54.007720   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:56.012280   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:56.353340   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 01:01:56.353399   58676 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 01:01:56.353416   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:56.404133   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:01:56.404176   58676 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:01:56.464272   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:56.470496   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:01:56.470553   58676 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:01:56.964058   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:56.975831   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:01:56.975877   58676 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:01:57.464038   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:57.472652   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:01:57.472697   58676 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:01:57.964020   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:57.970866   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 200:
	ok
	I1101 01:01:57.979612   58676 api_server.go:141] control plane version: v1.28.3
	I1101 01:01:57.979641   58676 api_server.go:131] duration metric: took 5.516990946s to wait for apiserver health ...
	I1101 01:01:57.979650   58676 cni.go:84] Creating CNI manager for ""
	I1101 01:01:57.979657   58676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:01:57.981694   58676 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:01:54.990377   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:57.489817   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:55.505767   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:57.505977   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:00.004800   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:57.983198   58676 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:01:58.006916   58676 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1101 01:01:58.035969   58676 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:01:58.047783   58676 system_pods.go:59] 8 kube-system pods found
	I1101 01:01:58.047833   58676 system_pods.go:61] "coredns-5dd5756b68-kcjf2" [e5cba8fe-f5c0-48cd-a21b-649caf4405cd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 01:01:58.047848   58676 system_pods.go:61] "etcd-no-preload-008483" [6e8ce64d-5c27-4528-9ecb-4bd1c2ab55c9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 01:01:58.047868   58676 system_pods.go:61] "kube-apiserver-no-preload-008483" [c320b03e-f364-4b38-8f09-5239d66f90e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 01:01:58.047881   58676 system_pods.go:61] "kube-controller-manager-no-preload-008483" [b89beee3-61e6-4efa-926f-43ae6a50e44b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 01:01:58.047893   58676 system_pods.go:61] "kube-proxy-xjfsj" [a7195683-b9ee-440c-82e6-efcd325a35e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 01:01:58.047907   58676 system_pods.go:61] "kube-scheduler-no-preload-008483" [d8c6a1f5-ceca-46af-9a40-22053f5387b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 01:01:58.047920   58676 system_pods.go:61] "metrics-server-57f55c9bc5-49wtw" [b87d5491-9981-48d5-9cf8-34dbd4b24435] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:01:58.047946   58676 system_pods.go:61] "storage-provisioner" [bf9d5910-ae5f-48f9-9358-54b2068c2e2c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 01:01:58.047959   58676 system_pods.go:74] duration metric: took 11.96541ms to wait for pod list to return data ...
	I1101 01:01:58.047971   58676 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:01:58.052170   58676 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:01:58.052205   58676 node_conditions.go:123] node cpu capacity is 2
	I1101 01:01:58.052218   58676 node_conditions.go:105] duration metric: took 4.239786ms to run NodePressure ...
	I1101 01:01:58.052237   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:58.340580   58676 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1101 01:01:58.351480   58676 kubeadm.go:787] kubelet initialised
	I1101 01:01:58.351510   58676 kubeadm.go:788] duration metric: took 10.903426ms waiting for restarted kubelet to initialise ...
	I1101 01:01:58.351520   58676 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:01:58.359099   58676 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-kcjf2" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:00.383123   58676 pod_ready.go:102] pod "coredns-5dd5756b68-kcjf2" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:58.509858   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:01.009429   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:59.988392   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:01.989042   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:02.505009   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:05.004485   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:02.880623   58676 pod_ready.go:102] pod "coredns-5dd5756b68-kcjf2" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:04.878534   58676 pod_ready.go:92] pod "coredns-5dd5756b68-kcjf2" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:04.878556   58676 pod_ready.go:81] duration metric: took 6.519426334s waiting for pod "coredns-5dd5756b68-kcjf2" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:04.878565   58676 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:03.508377   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:05.508570   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:03.990099   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:06.488196   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:07.005182   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:09.505205   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:06.907992   58676 pod_ready.go:102] pod "etcd-no-preload-008483" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:09.400005   58676 pod_ready.go:102] pod "etcd-no-preload-008483" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:09.900354   58676 pod_ready.go:92] pod "etcd-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:09.900379   58676 pod_ready.go:81] duration metric: took 5.021808339s waiting for pod "etcd-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.900394   58676 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.906496   58676 pod_ready.go:92] pod "kube-apiserver-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:09.906520   58676 pod_ready.go:81] duration metric: took 6.117499ms waiting for pod "kube-apiserver-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.906532   58676 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.911830   58676 pod_ready.go:92] pod "kube-controller-manager-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:09.911850   58676 pod_ready.go:81] duration metric: took 5.311751ms waiting for pod "kube-controller-manager-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.911859   58676 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xjfsj" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.916419   58676 pod_ready.go:92] pod "kube-proxy-xjfsj" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:09.916442   58676 pod_ready.go:81] duration metric: took 4.576855ms waiting for pod "kube-proxy-xjfsj" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.916454   58676 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.921501   58676 pod_ready.go:92] pod "kube-scheduler-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:09.921525   58676 pod_ready.go:81] duration metric: took 5.064522ms waiting for pod "kube-scheduler-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.921536   58676 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:07.514883   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:10.008399   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:08.490011   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:10.988504   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:12.989076   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:11.507014   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:13.509053   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:12.205003   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:14.705621   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:12.509113   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:15.009543   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:15.487844   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:17.488178   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:16.003423   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:18.003597   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:20.004472   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:17.205434   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:19.214743   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:17.508997   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:20.008838   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:22.009023   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:19.488902   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:21.988210   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:22.004908   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:24.503394   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:21.704199   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:23.704855   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:25.705319   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:24.508980   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:27.008249   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:23.988985   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:26.489079   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:26.504752   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:28.505579   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:27.709065   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:30.205608   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:29.507299   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:31.509017   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:28.988567   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:31.488567   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:30.507770   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:33.005199   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:32.707783   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:35.206392   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:34.007977   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:36.008250   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:33.988120   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:36.489908   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:35.503482   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:37.504132   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:39.504348   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:37.704511   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:39.705791   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:38.008778   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:40.509040   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:38.987615   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:40.988646   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:42.005253   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:44.008492   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:42.206082   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:44.704875   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:43.009095   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:45.508557   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:43.489792   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:45.987971   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:47.989322   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:46.504096   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:49.004605   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:47.205736   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:49.704264   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:47.510014   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:50.009950   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:50.489334   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:52.987877   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:51.005543   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:53.504243   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:52.205173   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:54.704843   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:52.509247   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:55.009346   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:55.488330   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:57.987845   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:55.504494   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:58.003674   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:00.004598   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:57.205092   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:59.705637   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:57.522422   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:00.007902   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:02.009964   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:59.987956   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:01.989730   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:02.005953   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:04.007095   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:02.205761   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:04.704065   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:04.508531   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:06.512303   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:04.487667   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:06.487854   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:06.503630   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:08.504993   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:06.704568   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:08.705012   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:09.008519   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:11.509450   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:08.488843   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:10.987614   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:12.989824   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:10.505932   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:13.005799   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:11.203683   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:13.204241   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:15.705287   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:14.008244   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:16.009433   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:15.488278   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:17.988683   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:15.503739   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:17.506253   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:20.004613   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:18.204056   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:20.205312   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:18.009706   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:20.508744   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:20.490044   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:22.989002   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:22.504922   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:25.004156   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:22.704711   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:25.205072   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:23.008359   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:25.509196   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:25.487961   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:27.488324   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:27.008179   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:29.504182   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:27.205671   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:29.208402   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:27.509247   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:30.008627   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:29.988286   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:32.487504   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:31.504973   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:34.004168   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:31.704298   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:33.704452   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:32.507959   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:35.008631   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:37.009271   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:34.488458   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:36.488759   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:36.503146   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:38.504444   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:36.204750   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:38.705346   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:39.507406   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:41.509812   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:38.988439   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:41.491496   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:40.505301   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:42.506003   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:45.004872   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:41.204015   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:43.206055   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:45.705597   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:44.008441   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:46.009900   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:43.987813   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:45.988508   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:47.989201   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:47.505799   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:49.506424   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:48.204686   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:50.704155   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:48.511303   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:51.008360   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:50.488123   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:52.488356   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:52.004387   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:54.505016   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:52.705891   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:54.706732   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:53.008988   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:55.507791   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:54.988620   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:56.990186   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:57.005565   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:59.505220   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:57.205342   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:59.215160   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:57.508013   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:59.509883   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:01.510115   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:59.490512   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:01.988008   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:02.004869   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:04.503903   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:01.704963   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:04.204466   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:04.007146   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:06.007815   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:04.488270   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:06.987544   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:06.505818   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:09.006093   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:06.205560   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:08.703961   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:10.705037   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:08.008817   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:10.508585   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:08.988223   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:10.989742   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:12.990669   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:11.503914   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:13.504018   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:13.206290   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:15.704820   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:13.008696   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:15.010312   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:15.487596   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:17.489381   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:15.505665   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:18.004825   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:20.004966   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:18.205022   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:20.703582   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:17.508842   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:20.008489   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:22.008572   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:19.988378   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:22.490000   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:22.005055   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:24.504050   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:22.704263   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:24.704479   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:24.507893   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:27.009371   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:24.988500   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:27.490306   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:26.504850   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:29.003907   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:27.204442   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:29.204906   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:29.508234   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:31.508285   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:29.988549   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:32.490618   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:31.504800   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:33.506025   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:31.704974   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:34.204565   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:33.512784   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:36.009709   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:34.988579   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:37.491535   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:36.011080   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:38.503881   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:36.204772   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:38.205329   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:40.707128   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:38.509404   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:41.009915   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:39.988897   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:42.487751   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:40.504606   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:42.504912   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:44.505101   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:43.205005   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:45.207096   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:43.507714   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:45.508866   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:44.988852   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:47.488268   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:47.004069   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:49.005029   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:47.704762   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:49.705584   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:48.009495   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:50.508392   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:49.488880   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:51.988841   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:51.504680   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:54.010010   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:52.204554   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:54.705101   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:53.008194   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:55.008373   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:57.009351   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:54.489702   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:56.389066   58730 pod_ready.go:81] duration metric: took 4m0.000951404s waiting for pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace to be "Ready" ...
	E1101 01:04:56.389116   58730 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1101 01:04:56.389139   58730 pod_ready.go:38] duration metric: took 4m11.103640013s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:04:56.389173   58730 kubeadm.go:640] restartCluster took 4m34.207263569s
	W1101 01:04:56.389254   58730 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1101 01:04:56.389292   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1101 01:04:56.504421   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:58.505542   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:56.705911   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:58.706099   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:00.706478   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:59.509462   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:02.009472   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:00.509320   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:03.007708   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:03.203884   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:05.204356   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:04.009580   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:06.508160   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:05.505057   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:07.506811   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:10.004080   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:07.205229   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:09.206089   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:08.509319   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:11.009099   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:12.261608   58730 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (15.872291337s)
	I1101 01:05:12.261694   58730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:05:12.275334   58730 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:05:12.284969   58730 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:05:12.295834   58730 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:05:12.295881   58730 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1101 01:05:12.526039   58730 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 01:05:12.005261   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:14.005683   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:11.706864   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:14.204758   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:13.508597   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:16.008784   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:16.506282   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:19.004037   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:16.205361   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:18.704890   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:18.008878   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:20.009861   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:23.201664   58730 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1101 01:05:23.201785   58730 kubeadm.go:322] [preflight] Running pre-flight checks
	I1101 01:05:23.201920   58730 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 01:05:23.202057   58730 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 01:05:23.202178   58730 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 01:05:23.202255   58730 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 01:05:23.204179   58730 out.go:204]   - Generating certificates and keys ...
	I1101 01:05:23.204304   58730 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1101 01:05:23.204384   58730 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1101 01:05:23.204480   58730 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 01:05:23.204557   58730 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1101 01:05:23.204639   58730 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1101 01:05:23.204715   58730 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1101 01:05:23.204792   58730 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1101 01:05:23.204884   58730 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1101 01:05:23.205007   58730 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 01:05:23.205133   58730 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 01:05:23.205195   58730 kubeadm.go:322] [certs] Using the existing "sa" key
	I1101 01:05:23.205273   58730 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 01:05:23.205332   58730 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 01:05:23.205391   58730 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 01:05:23.205461   58730 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 01:05:23.205550   58730 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 01:05:23.205656   58730 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 01:05:23.205734   58730 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 01:05:23.207792   58730 out.go:204]   - Booting up control plane ...
	I1101 01:05:23.207914   58730 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 01:05:23.208028   58730 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 01:05:23.208124   58730 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 01:05:23.208244   58730 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 01:05:23.208322   58730 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 01:05:23.208356   58730 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1101 01:05:23.208496   58730 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 01:05:23.208569   58730 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003034 seconds
	I1101 01:05:23.208662   58730 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 01:05:23.208762   58730 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 01:05:23.208840   58730 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 01:05:23.209055   58730 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-754132 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 01:05:23.209148   58730 kubeadm.go:322] [bootstrap-token] Using token: j0j8ab.rja1mh5j9krst0k4
	I1101 01:05:23.210755   58730 out.go:204]   - Configuring RBAC rules ...
	I1101 01:05:23.210895   58730 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 01:05:23.211001   58730 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 01:05:23.211205   58730 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 01:05:23.211369   58730 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 01:05:23.211509   58730 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 01:05:23.211617   58730 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 01:05:23.211776   58730 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 01:05:23.211851   58730 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1101 01:05:23.211894   58730 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1101 01:05:23.211901   58730 kubeadm.go:322] 
	I1101 01:05:23.211985   58730 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1101 01:05:23.211992   58730 kubeadm.go:322] 
	I1101 01:05:23.212076   58730 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1101 01:05:23.212085   58730 kubeadm.go:322] 
	I1101 01:05:23.212128   58730 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1101 01:05:23.212205   58730 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 01:05:23.212256   58730 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 01:05:23.212263   58730 kubeadm.go:322] 
	I1101 01:05:23.212305   58730 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1101 01:05:23.212314   58730 kubeadm.go:322] 
	I1101 01:05:23.212352   58730 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 01:05:23.212359   58730 kubeadm.go:322] 
	I1101 01:05:23.212400   58730 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1101 01:05:23.212461   58730 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 01:05:23.212568   58730 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 01:05:23.212584   58730 kubeadm.go:322] 
	I1101 01:05:23.212699   58730 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 01:05:23.212787   58730 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1101 01:05:23.212797   58730 kubeadm.go:322] 
	I1101 01:05:23.212862   58730 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token j0j8ab.rja1mh5j9krst0k4 \
	I1101 01:05:23.212943   58730 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 \
	I1101 01:05:23.212962   58730 kubeadm.go:322] 	--control-plane 
	I1101 01:05:23.212968   58730 kubeadm.go:322] 
	I1101 01:05:23.213083   58730 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1101 01:05:23.213093   58730 kubeadm.go:322] 
	I1101 01:05:23.213202   58730 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token j0j8ab.rja1mh5j9krst0k4 \
	I1101 01:05:23.213346   58730 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 
	I1101 01:05:23.213366   58730 cni.go:84] Creating CNI manager for ""
	I1101 01:05:23.213375   58730 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:05:23.215058   58730 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:05:23.216515   58730 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:05:23.251532   58730 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
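The "Configuring bridge CNI" step above copies a 457-byte conflist to /etc/cni/net.d/1-k8s.conflist on the node. As a rough sketch of what a bridge-plus-portmap conflist of that kind looks like (the exact content minikube writes may differ; the plugin options and the 10.244.0.0/16 subnet below are assumptions), written out from Go for illustration:

package main

import (
	"log"
	"os"
)

// bridgeConflist is an illustrative bridge/portmap CNI chain, not a verbatim
// copy of the file minikube generates.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	// Write to a local scratch file; on the node the file lives under /etc/cni/net.d/.
	if err := os.WriteFile("1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}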
	I1101 01:05:21.007674   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:23.505067   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:21.204745   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:23.206316   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:25.211036   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:22.507158   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:24.508157   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:26.508990   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:23.291112   58730 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 01:05:23.291192   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:23.291224   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9 minikube.k8s.io/name=embed-certs-754132 minikube.k8s.io/updated_at=2023_11_01T01_05_23_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:23.452410   58730 ops.go:34] apiserver oom_adj: -16
	I1101 01:05:23.635798   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:23.754993   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:24.350830   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:24.850468   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:25.350887   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:25.850719   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:26.350946   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:26.850869   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:27.350851   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:27.850856   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:25.507083   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:27.511273   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:29.974545   59148 pod_ready.go:81] duration metric: took 4m0.000148043s waiting for pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace to be "Ready" ...
	E1101 01:05:29.974585   59148 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1101 01:05:29.974607   59148 pod_ready.go:38] duration metric: took 4m5.715718658s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:05:29.974652   59148 kubeadm.go:640] restartCluster took 4m26.139306333s
	W1101 01:05:29.974746   59148 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1101 01:05:29.974779   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1101 01:05:27.704338   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:30.205751   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:29.008649   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:31.009235   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:28.350920   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:28.850670   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:29.350172   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:29.850241   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:30.351225   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:30.851276   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:31.350289   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:31.850999   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:32.350874   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:32.850500   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:32.708147   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:35.205568   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:33.351023   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:33.851109   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:34.351257   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:34.850212   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:35.350277   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:35.850281   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:36.350770   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:36.456508   58730 kubeadm.go:1081] duration metric: took 13.165385995s to wait for elevateKubeSystemPrivileges.
	I1101 01:05:36.456550   58730 kubeadm.go:406] StartCluster complete in 5m14.31984828s
	I1101 01:05:36.456575   58730 settings.go:142] acquiring lock: {Name:mk7f269e64dfd8d176737f993e01f6e6badafbc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:05:36.456674   58730 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 01:05:36.458488   58730 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/kubeconfig: {Name:mk08da65b6c71084e1cfafb19800038e8c8303e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:05:36.458789   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 01:05:36.458936   58730 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1101 01:05:36.459029   58730 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-754132"
	I1101 01:05:36.459061   58730 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-754132"
	W1101 01:05:36.459076   58730 addons.go:240] addon storage-provisioner should already be in state true
	I1101 01:05:36.459086   58730 addons.go:69] Setting metrics-server=true in profile "embed-certs-754132"
	I1101 01:05:36.459124   58730 addons.go:231] Setting addon metrics-server=true in "embed-certs-754132"
	I1101 01:05:36.459134   58730 host.go:66] Checking if "embed-certs-754132" exists ...
	I1101 01:05:36.459060   58730 config.go:182] Loaded profile config "embed-certs-754132": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:05:36.459062   58730 addons.go:69] Setting default-storageclass=true in profile "embed-certs-754132"
	I1101 01:05:36.459219   58730 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-754132"
	W1101 01:05:36.459138   58730 addons.go:240] addon metrics-server should already be in state true
	I1101 01:05:36.459347   58730 host.go:66] Checking if "embed-certs-754132" exists ...
	I1101 01:05:36.459588   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.459633   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.459638   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.459674   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.459689   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.459713   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.477136   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40825
	I1101 01:05:36.477207   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37895
	I1101 01:05:36.477706   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46261
	I1101 01:05:36.477874   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.477889   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.478086   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.478388   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.478405   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.478540   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.478561   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.478601   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.478622   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.478794   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.478990   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.479037   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.479219   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetState
	I1101 01:05:36.479379   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.479412   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.479587   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.479623   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.483272   58730 addons.go:231] Setting addon default-storageclass=true in "embed-certs-754132"
	W1101 01:05:36.483295   58730 addons.go:240] addon default-storageclass should already be in state true
	I1101 01:05:36.483318   58730 host.go:66] Checking if "embed-certs-754132" exists ...
	I1101 01:05:36.483665   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.483696   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.498137   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46727
	I1101 01:05:36.498148   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37157
	I1101 01:05:36.498530   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.499000   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.499024   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.499329   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.499499   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetState
	I1101 01:05:36.501223   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:05:36.503752   58730 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:05:36.505580   58730 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:05:36.505600   58730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 01:05:36.505617   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:05:36.505756   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37761
	I1101 01:05:36.506307   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.506765   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.506783   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.507257   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.507303   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.507766   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.507786   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.507852   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.507894   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.508136   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.508296   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetState
	I1101 01:05:36.509982   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:05:36.512303   58730 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1101 01:05:36.512065   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:05:36.513712   58730 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 01:05:36.513728   58730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 01:05:36.513749   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:05:36.512082   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:05:36.513819   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:05:36.513839   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:05:36.516632   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:05:36.516867   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:05:36.517052   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:05:36.517489   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:05:36.518036   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:05:36.518058   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:05:36.518360   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:05:36.519431   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:05:36.519602   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:05:36.519742   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:05:36.526881   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35481
	I1101 01:05:36.527462   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.527889   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.527902   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.528341   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.528511   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetState
	I1101 01:05:36.530250   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:05:36.530539   58730 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 01:05:36.530557   58730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 01:05:36.530575   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:05:36.533671   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:05:36.534068   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:05:36.534093   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:05:36.534368   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:05:36.534596   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:05:36.534741   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:05:36.534821   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:05:36.559098   58730 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-754132" context rescaled to 1 replicas
	I1101 01:05:36.559135   58730 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.83 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 01:05:36.561061   58730 out.go:177] * Verifying Kubernetes components...
	I1101 01:05:33.009726   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:35.507972   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:36.562382   58730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:05:36.684098   58730 node_ready.go:35] waiting up to 6m0s for node "embed-certs-754132" to be "Ready" ...
	I1101 01:05:36.684219   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 01:05:36.689836   58730 node_ready.go:49] node "embed-certs-754132" has status "Ready":"True"
	I1101 01:05:36.689863   58730 node_ready.go:38] duration metric: took 5.731179ms waiting for node "embed-certs-754132" to be "Ready" ...
	I1101 01:05:36.689875   58730 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:05:36.707509   58730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:05:36.743671   58730 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 01:05:36.743702   58730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1101 01:05:36.764886   58730 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:36.773895   58730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 01:05:36.810064   58730 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 01:05:36.810095   58730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 01:05:36.888833   58730 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:05:36.888854   58730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 01:05:36.892836   58730 pod_ready.go:92] pod "etcd-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:05:36.892864   58730 pod_ready.go:81] duration metric: took 127.938482ms waiting for pod "etcd-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:36.892879   58730 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:36.968554   58730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:05:36.978210   58730 pod_ready.go:92] pod "kube-apiserver-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:05:36.978239   58730 pod_ready.go:81] duration metric: took 85.351942ms waiting for pod "kube-apiserver-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:36.978254   58730 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:37.154956   58730 pod_ready.go:92] pod "kube-controller-manager-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:05:37.154983   58730 pod_ready.go:81] duration metric: took 176.720364ms waiting for pod "kube-controller-manager-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:37.154997   58730 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cwbfz" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:38.405267   58730 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.720993157s)
	I1101 01:05:38.405302   58730 start.go:926] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1101 01:05:38.840834   58730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.133283925s)
	I1101 01:05:38.840891   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.840906   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.840918   58730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.066970508s)
	I1101 01:05:38.841048   58730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.872463156s)
	I1101 01:05:38.841085   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.841098   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.841320   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.841370   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.841373   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.841328   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.841400   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.841412   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.841426   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.841390   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.841442   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.841454   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.841457   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.841354   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.844717   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.844730   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.844723   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.844744   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.844753   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.844757   58730 addons.go:467] Verifying addon metrics-server=true in "embed-certs-754132"
	I1101 01:05:38.844763   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.844774   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.844773   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.844789   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.844799   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.844808   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.845059   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.845077   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.845092   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.890752   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.890785   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.891075   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.891095   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.891108   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.892878   58730 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I1101 01:05:37.706877   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:39.707206   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:38.894405   58730 addons.go:502] enable addons completed in 2.435477984s: enabled=[metrics-server storage-provisioner default-storageclass]
	I1101 01:05:39.279100   58730 pod_ready.go:102] pod "kube-proxy-cwbfz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:40.775597   58730 pod_ready.go:92] pod "kube-proxy-cwbfz" in "kube-system" namespace has status "Ready":"True"
	I1101 01:05:40.775622   58730 pod_ready.go:81] duration metric: took 3.620618998s waiting for pod "kube-proxy-cwbfz" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:40.775644   58730 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:40.782773   58730 pod_ready.go:92] pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:05:40.782796   58730 pod_ready.go:81] duration metric: took 7.145643ms waiting for pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:40.782806   58730 pod_ready.go:38] duration metric: took 4.092919772s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:05:40.782821   58730 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:05:40.782868   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:05:40.811977   58730 api_server.go:72] duration metric: took 4.252812827s to wait for apiserver process to appear ...
	I1101 01:05:40.812000   58730 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:05:40.812017   58730 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8443/healthz ...
	I1101 01:05:40.817524   58730 api_server.go:279] https://192.168.61.83:8443/healthz returned 200:
	ok
	I1101 01:05:40.819599   58730 api_server.go:141] control plane version: v1.28.3
	I1101 01:05:40.819625   58730 api_server.go:131] duration metric: took 7.617418ms to wait for apiserver health ...
	I1101 01:05:40.819636   58730 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:05:40.826677   58730 system_pods.go:59] 8 kube-system pods found
	I1101 01:05:40.826714   58730 system_pods.go:61] "coredns-5dd5756b68-6kqbc" [e03e6370-35d1-4438-8b18-d62b0a253ea6] Running
	I1101 01:05:40.826722   58730 system_pods.go:61] "etcd-embed-certs-754132" [2cd8789c-8ba8-47ea-82f2-e461cbc9d3b3] Running
	I1101 01:05:40.826729   58730 system_pods.go:61] "kube-apiserver-embed-certs-754132" [81bd13a3-37ea-4bf6-9eb9-e66318137a21] Running
	I1101 01:05:40.826735   58730 system_pods.go:61] "kube-controller-manager-embed-certs-754132" [6aa18435-1990-479b-b975-7ac1d794d967] Running
	I1101 01:05:40.826742   58730 system_pods.go:61] "kube-proxy-cwbfz" [b7f5ba1e-bd63-456b-94cc-0e2c121b7792] Running
	I1101 01:05:40.826748   58730 system_pods.go:61] "kube-scheduler-embed-certs-754132" [64203f31-7c41-42d0-9d6b-bc63e1b423cc] Running
	I1101 01:05:40.826758   58730 system_pods.go:61] "metrics-server-57f55c9bc5-499xs" [617aecda-f132-4358-9da9-bbc4fc625da0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:05:40.826773   58730 system_pods.go:61] "storage-provisioner" [7feb8931-83d0-4968-a295-a4202e8fc8c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 01:05:40.826786   58730 system_pods.go:74] duration metric: took 7.142747ms to wait for pod list to return data ...
	I1101 01:05:40.826799   58730 default_sa.go:34] waiting for default service account to be created ...
	I1101 01:05:40.831268   58730 default_sa.go:45] found service account: "default"
	I1101 01:05:40.831295   58730 default_sa.go:55] duration metric: took 4.485602ms for default service account to be created ...
	I1101 01:05:40.831309   58730 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 01:05:40.891306   58730 system_pods.go:86] 8 kube-system pods found
	I1101 01:05:40.891335   58730 system_pods.go:89] "coredns-5dd5756b68-6kqbc" [e03e6370-35d1-4438-8b18-d62b0a253ea6] Running
	I1101 01:05:40.891341   58730 system_pods.go:89] "etcd-embed-certs-754132" [2cd8789c-8ba8-47ea-82f2-e461cbc9d3b3] Running
	I1101 01:05:40.891346   58730 system_pods.go:89] "kube-apiserver-embed-certs-754132" [81bd13a3-37ea-4bf6-9eb9-e66318137a21] Running
	I1101 01:05:40.891350   58730 system_pods.go:89] "kube-controller-manager-embed-certs-754132" [6aa18435-1990-479b-b975-7ac1d794d967] Running
	I1101 01:05:40.891354   58730 system_pods.go:89] "kube-proxy-cwbfz" [b7f5ba1e-bd63-456b-94cc-0e2c121b7792] Running
	I1101 01:05:40.891358   58730 system_pods.go:89] "kube-scheduler-embed-certs-754132" [64203f31-7c41-42d0-9d6b-bc63e1b423cc] Running
	I1101 01:05:40.891366   58730 system_pods.go:89] "metrics-server-57f55c9bc5-499xs" [617aecda-f132-4358-9da9-bbc4fc625da0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:05:40.891373   58730 system_pods.go:89] "storage-provisioner" [7feb8931-83d0-4968-a295-a4202e8fc8c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 01:05:40.891381   58730 system_pods.go:126] duration metric: took 60.065984ms to wait for k8s-apps to be running ...
	I1101 01:05:40.891391   58730 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 01:05:40.891436   58730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:05:40.906845   58730 system_svc.go:56] duration metric: took 15.443235ms WaitForService to wait for kubelet.
	I1101 01:05:40.906875   58730 kubeadm.go:581] duration metric: took 4.347718478s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1101 01:05:40.906895   58730 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:05:41.089628   58730 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:05:41.089654   58730 node_conditions.go:123] node cpu capacity is 2
	I1101 01:05:41.089664   58730 node_conditions.go:105] duration metric: took 182.764311ms to run NodePressure ...
	I1101 01:05:41.089674   58730 start.go:228] waiting for startup goroutines ...
	I1101 01:05:41.089680   58730 start.go:233] waiting for cluster config update ...
	I1101 01:05:41.089693   58730 start.go:242] writing updated cluster config ...
	I1101 01:05:41.089950   58730 ssh_runner.go:195] Run: rm -f paused
	I1101 01:05:41.140594   58730 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1101 01:05:41.143142   58730 out.go:177] * Done! kubectl is now configured to use "embed-certs-754132" cluster and "default" namespace by default
	I1101 01:05:37.516552   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:40.009373   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:43.882201   59148 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.907397495s)
	I1101 01:05:43.882275   59148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:05:43.897793   59148 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:05:43.908350   59148 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:05:43.919013   59148 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:05:43.919066   59148 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1101 01:05:43.992534   59148 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1101 01:05:43.992653   59148 kubeadm.go:322] [preflight] Running pre-flight checks
	I1101 01:05:44.162750   59148 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 01:05:44.162906   59148 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 01:05:44.163052   59148 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 01:05:44.398016   59148 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 01:05:44.399998   59148 out.go:204]   - Generating certificates and keys ...
	I1101 01:05:44.400102   59148 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1101 01:05:44.400189   59148 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1101 01:05:44.400334   59148 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 01:05:44.400431   59148 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1101 01:05:44.400526   59148 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1101 01:05:44.400602   59148 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1101 01:05:44.400736   59148 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1101 01:05:44.400821   59148 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1101 01:05:44.401336   59148 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 01:05:44.401936   59148 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 01:05:44.402420   59148 kubeadm.go:322] [certs] Using the existing "sa" key
	I1101 01:05:44.402515   59148 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 01:05:44.470807   59148 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 01:05:44.642677   59148 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 01:05:44.768991   59148 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 01:05:45.052817   59148 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 01:05:45.053698   59148 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 01:05:45.056339   59148 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 01:05:42.204108   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:44.205679   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:42.508073   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:43.201762   58823 pod_ready.go:81] duration metric: took 4m0.000100455s waiting for pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace to be "Ready" ...
	E1101 01:05:43.201795   58823 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1101 01:05:43.201816   58823 pod_ready.go:38] duration metric: took 4m1.199592624s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:05:43.201848   58823 kubeadm.go:640] restartCluster took 4m57.555406731s
	W1101 01:05:43.201899   58823 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1101 01:05:43.201920   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1101 01:05:45.058304   59148 out.go:204]   - Booting up control plane ...
	I1101 01:05:45.058434   59148 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 01:05:45.058565   59148 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 01:05:45.060937   59148 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 01:05:45.078776   59148 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 01:05:45.079692   59148 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 01:05:45.079771   59148 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1101 01:05:45.204880   59148 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 01:05:46.208575   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:48.705698   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:50.708163   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:48.240337   58823 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.038387523s)
	I1101 01:05:48.240417   58823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:05:48.257585   58823 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:05:48.266949   58823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:05:48.277302   58823 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:05:48.277346   58823 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1101 01:05:48.514394   58823 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 01:05:54.708746   59148 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503354 seconds
	I1101 01:05:54.708894   59148 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 01:05:54.726194   59148 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 01:05:55.266392   59148 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 01:05:55.266670   59148 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-639310 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 01:05:55.783906   59148 kubeadm.go:322] [bootstrap-token] Using token: ilpx6n.m6vs8mqxrjuf2w8f
	I1101 01:05:53.205312   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:55.206016   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:55.786231   59148 out.go:204]   - Configuring RBAC rules ...
	I1101 01:05:55.786370   59148 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 01:05:55.793682   59148 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 01:05:55.812319   59148 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 01:05:55.819324   59148 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 01:05:55.825785   59148 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 01:05:55.831793   59148 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 01:05:55.858443   59148 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 01:05:56.195472   59148 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1101 01:05:56.248405   59148 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1101 01:05:56.249655   59148 kubeadm.go:322] 
	I1101 01:05:56.249745   59148 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1101 01:05:56.249759   59148 kubeadm.go:322] 
	I1101 01:05:56.249852   59148 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1101 01:05:56.249869   59148 kubeadm.go:322] 
	I1101 01:05:56.249931   59148 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1101 01:05:56.249992   59148 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 01:05:56.250076   59148 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 01:05:56.250088   59148 kubeadm.go:322] 
	I1101 01:05:56.250163   59148 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1101 01:05:56.250172   59148 kubeadm.go:322] 
	I1101 01:05:56.250261   59148 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 01:05:56.250281   59148 kubeadm.go:322] 
	I1101 01:05:56.250344   59148 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1101 01:05:56.250436   59148 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 01:05:56.250560   59148 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 01:05:56.250574   59148 kubeadm.go:322] 
	I1101 01:05:56.250663   59148 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 01:05:56.250757   59148 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1101 01:05:56.250769   59148 kubeadm.go:322] 
	I1101 01:05:56.250881   59148 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token ilpx6n.m6vs8mqxrjuf2w8f \
	I1101 01:05:56.251011   59148 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 \
	I1101 01:05:56.251041   59148 kubeadm.go:322] 	--control-plane 
	I1101 01:05:56.251053   59148 kubeadm.go:322] 
	I1101 01:05:56.251150   59148 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1101 01:05:56.251162   59148 kubeadm.go:322] 
	I1101 01:05:56.251259   59148 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token ilpx6n.m6vs8mqxrjuf2w8f \
	I1101 01:05:56.251383   59148 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 
	I1101 01:05:56.251922   59148 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 01:05:56.251982   59148 cni.go:84] Creating CNI manager for ""
	I1101 01:05:56.252008   59148 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:05:56.254247   59148 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:05:56.256068   59148 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:05:56.281994   59148 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1101 01:05:56.324660   59148 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 01:05:56.324796   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:56.324863   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9 minikube.k8s.io/name=default-k8s-diff-port-639310 minikube.k8s.io/updated_at=2023_11_01T01_05_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:56.739064   59148 ops.go:34] apiserver oom_adj: -16
	I1101 01:05:56.739245   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:56.834852   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:57.432044   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:57.931920   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:58.432414   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:58.932871   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:59.432755   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:59.932515   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:57.704234   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:59.705516   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:01.231970   58823 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1101 01:06:01.232062   58823 kubeadm.go:322] [preflight] Running pre-flight checks
	I1101 01:06:01.232156   58823 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 01:06:01.232289   58823 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 01:06:01.232419   58823 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 01:06:01.232595   58823 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 01:06:01.232714   58823 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 01:06:01.232790   58823 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1101 01:06:01.232890   58823 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 01:06:01.235429   58823 out.go:204]   - Generating certificates and keys ...
	I1101 01:06:01.235533   58823 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1101 01:06:01.235606   58823 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1101 01:06:01.235675   58823 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 01:06:01.235782   58823 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1101 01:06:01.235889   58823 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1101 01:06:01.235973   58823 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1101 01:06:01.236065   58823 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1101 01:06:01.236153   58823 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1101 01:06:01.236263   58823 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 01:06:01.236383   58823 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 01:06:01.236447   58823 kubeadm.go:322] [certs] Using the existing "sa" key
	I1101 01:06:01.236528   58823 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 01:06:01.236607   58823 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 01:06:01.236728   58823 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 01:06:01.236811   58823 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 01:06:01.236877   58823 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 01:06:01.236955   58823 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 01:06:01.238699   58823 out.go:204]   - Booting up control plane ...
	I1101 01:06:01.238808   58823 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 01:06:01.238904   58823 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 01:06:01.238990   58823 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 01:06:01.239092   58823 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 01:06:01.239289   58823 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 01:06:01.239387   58823 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.004023 seconds
	I1101 01:06:01.239528   58823 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 01:06:01.239741   58823 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 01:06:01.239796   58823 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 01:06:01.239971   58823 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-330042 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1101 01:06:01.240056   58823 kubeadm.go:322] [bootstrap-token] Using token: lseik6.3ozwuciianl7vrri
	I1101 01:06:01.241690   58823 out.go:204]   - Configuring RBAC rules ...
	I1101 01:06:01.241825   58823 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 01:06:01.242015   58823 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 01:06:01.242170   58823 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 01:06:01.242265   58823 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 01:06:01.242380   58823 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 01:06:01.242448   58823 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1101 01:06:01.242517   58823 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1101 01:06:01.242549   58823 kubeadm.go:322] 
	I1101 01:06:01.242631   58823 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1101 01:06:01.242646   58823 kubeadm.go:322] 
	I1101 01:06:01.242753   58823 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1101 01:06:01.242764   58823 kubeadm.go:322] 
	I1101 01:06:01.242801   58823 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1101 01:06:01.242883   58823 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 01:06:01.242956   58823 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 01:06:01.242965   58823 kubeadm.go:322] 
	I1101 01:06:01.243041   58823 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1101 01:06:01.243152   58823 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 01:06:01.243249   58823 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 01:06:01.243261   58823 kubeadm.go:322] 
	I1101 01:06:01.243357   58823 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1101 01:06:01.243421   58823 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1101 01:06:01.243425   58823 kubeadm.go:322] 
	I1101 01:06:01.243490   58823 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token lseik6.3ozwuciianl7vrri \
	I1101 01:06:01.243597   58823 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 \
	I1101 01:06:01.243619   58823 kubeadm.go:322]     --control-plane 	  
	I1101 01:06:01.243623   58823 kubeadm.go:322] 
	I1101 01:06:01.243697   58823 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1101 01:06:01.243702   58823 kubeadm.go:322] 
	I1101 01:06:01.243773   58823 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token lseik6.3ozwuciianl7vrri \
	I1101 01:06:01.243923   58823 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 
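	(Note: the join commands printed above pin the cluster CA with --discovery-token-ca-cert-hash. That value is the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA certificate; the same hash appears for both profiles in this run, consistent with minikube reusing one CA from its .minikube directory across profiles. A standalone Go sketch of recomputing such a pin from the conventional kubeadm CA path -- this is not minikube's code, and the path is an assumption:)

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Conventional kubeadm location for the cluster CA certificate (assumed path).
	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The pin is the SHA-256 of the DER-encoded SubjectPublicKeyInfo of the CA public key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
```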
	I1101 01:06:01.243967   58823 cni.go:84] Creating CNI manager for ""
	I1101 01:06:01.243979   58823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:06:01.246766   58823 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:06:01.248244   58823 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:06:01.274713   58823 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1101 01:06:01.299087   58823 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 01:06:01.299184   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:01.299241   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9 minikube.k8s.io/name=old-k8s-version-330042 minikube.k8s.io/updated_at=2023_11_01T01_06_01_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:01.350480   58823 ops.go:34] apiserver oom_adj: -16
	I1101 01:06:01.668212   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:01.795923   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:02.398955   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:00.432038   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:00.932486   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:01.431924   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:01.932050   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:02.432828   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:02.932070   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:03.432833   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:03.931826   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:04.432522   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:04.932660   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:01.705717   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:04.205431   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:02.899285   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:03.398507   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:03.898445   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:04.399301   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:04.898647   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:05.399211   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:05.899099   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:06.398426   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:06.898703   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:07.399266   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:05.431880   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:05.932001   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:06.432804   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:06.932744   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:07.432405   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:07.932531   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:08.432007   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:08.560694   59148 kubeadm.go:1081] duration metric: took 12.235943971s to wait for elevateKubeSystemPrivileges.
	I1101 01:06:08.560733   59148 kubeadm.go:406] StartCluster complete in 5m4.77698433s
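	(Note: the long run of `kubectl get sa default` invocations above, roughly one every 500 ms, is minikube waiting for the default service account to exist before granting it cluster-admin; the elevateKubeSystemPrivileges line reports how long that took. A hedged sketch of that retry pattern -- binary and kubeconfig paths copied from the log, the timeout budget and sleep interval are assumptions:)

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Poll until `kubectl get sa default` succeeds, mirroring the repeated
// log lines above. Not minikube's actual implementation.
func waitForDefaultSA(kubectl, kubeconfig string) error {
	deadline := time.Now().Add(2 * time.Minute) // assumed budget
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the default service account is present
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for default service account")
}

func main() {
	err := waitForDefaultSA(
		"/var/lib/minikube/binaries/v1.28.3/kubectl",
		"/var/lib/minikube/kubeconfig")
	fmt.Println(err)
}
```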
	I1101 01:06:08.560756   59148 settings.go:142] acquiring lock: {Name:mk7f269e64dfd8d176737f993e01f6e6badafbc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:06:08.560862   59148 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 01:06:08.563346   59148 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/kubeconfig: {Name:mk08da65b6c71084e1cfafb19800038e8c8303e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:06:08.563655   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 01:06:08.563793   59148 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1101 01:06:08.563857   59148 config.go:182] Loaded profile config "default-k8s-diff-port-639310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:06:08.563874   59148 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-639310"
	I1101 01:06:08.563892   59148 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-639310"
	I1101 01:06:08.563905   59148 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-639310"
	I1101 01:06:08.563917   59148 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-639310"
	I1101 01:06:08.563950   59148 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-639310"
	I1101 01:06:08.563899   59148 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-639310"
	W1101 01:06:08.563962   59148 addons.go:240] addon metrics-server should already be in state true
	W1101 01:06:08.563990   59148 addons.go:240] addon storage-provisioner should already be in state true
	I1101 01:06:08.564025   59148 host.go:66] Checking if "default-k8s-diff-port-639310" exists ...
	I1101 01:06:08.564064   59148 host.go:66] Checking if "default-k8s-diff-port-639310" exists ...
	I1101 01:06:08.564369   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.564404   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.564421   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.564453   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.564455   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.564488   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.581714   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37509
	I1101 01:06:08.582180   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.583081   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35137
	I1101 01:06:08.583312   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.583332   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.583553   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41541
	I1101 01:06:08.583702   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.583714   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.583891   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.584174   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.584200   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.584272   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.584302   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.584638   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.584687   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.584737   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.584993   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.585152   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetState
	I1101 01:06:08.585215   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.585256   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.588703   59148 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-639310"
	W1101 01:06:08.588728   59148 addons.go:240] addon default-storageclass should already be in state true
	I1101 01:06:08.588754   59148 host.go:66] Checking if "default-k8s-diff-port-639310" exists ...
	I1101 01:06:08.589158   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.589193   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.600826   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40787
	I1101 01:06:08.601314   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.601952   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.601976   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.602335   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.602560   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetState
	I1101 01:06:08.603276   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35887
	I1101 01:06:08.603415   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36765
	I1101 01:06:08.603803   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.604098   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.604276   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.604290   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.604490   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.604506   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.604573   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:06:08.604778   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.606338   59148 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:06:08.605001   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.605380   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.607632   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.607705   59148 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:06:08.607717   59148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 01:06:08.607731   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:06:08.607995   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetState
	I1101 01:06:08.610424   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:06:08.612025   59148 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1101 01:06:08.613346   59148 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 01:06:08.613365   59148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 01:06:08.613386   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:06:08.611304   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:06:08.611864   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:06:08.613461   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:06:08.613508   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:06:08.613650   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:06:08.613769   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:06:08.613869   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:06:08.618717   59148 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-639310" context rescaled to 1 replicas
	I1101 01:06:08.618755   59148 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.97 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 01:06:08.620291   59148 out.go:177] * Verifying Kubernetes components...
	I1101 01:06:08.618896   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:06:08.620048   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:06:08.621662   59148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:06:08.621747   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:06:08.621777   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:06:08.622129   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:06:08.622359   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:06:08.622526   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:06:08.629241   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42169
	I1101 01:06:08.629773   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.630164   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.630181   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.630428   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.630558   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetState
	I1101 01:06:08.631892   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:06:08.632176   59148 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 01:06:08.632197   59148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 01:06:08.632216   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:06:08.634872   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:06:08.635211   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:06:08.635241   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:06:08.635375   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:06:08.635576   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:06:08.635713   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:06:08.635839   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:06:08.984005   59148 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 01:06:08.984032   59148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1101 01:06:08.991838   59148 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-639310" to be "Ready" ...
	I1101 01:06:08.991921   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
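	(Note: the sed pipeline above is how minikube injects a host.minikube.internal record into the CoreDNS Corefile: it inserts a hosts block pointing at the host-side gateway IP, 192.168.72.1 here, before the `forward . /etc/resolv.conf` line, adds `log` after `errors`, and replaces the ConfigMap. A rough Go rendering of just the hosts-block insertion; the sample Corefile content is an assumption:)

```go
package main

import (
	"fmt"
	"strings"
)

// Illustrative only: reproduce in Go what the sed pipeline in the log does to
// the CoreDNS Corefile -- insert a hosts{} block resolving host.minikube.internal
// before the "forward . /etc/resolv.conf" line.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	sample := "        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n"
	fmt.Print(injectHostRecord(sample, "192.168.72.1"))
}
```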
	I1101 01:06:09.011096   59148 node_ready.go:49] node "default-k8s-diff-port-639310" has status "Ready":"True"
	I1101 01:06:09.011124   59148 node_ready.go:38] duration metric: took 19.250763ms waiting for node "default-k8s-diff-port-639310" to be "Ready" ...
	I1101 01:06:09.011136   59148 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:06:09.043526   59148 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:09.071032   59148 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 01:06:09.071065   59148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 01:06:09.089683   59148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 01:06:09.090332   59148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:06:09.139676   59148 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:06:09.139702   59148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 01:06:09.219436   59148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
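	(Note: the apply above installs the four metrics-server addon manifests -- APIService, Deployment, RBAC, and Service -- in one kubectl call; the later "Verifying addon metrics-server=true" line is minikube checking the result. One way to spot-check the same thing by hand is to look at the APIService registration; v1beta1.metrics.k8s.io is the conventional object name for metrics-server and is an assumption here, not taken from this log:)

```go
package main

import (
	"fmt"
	"os/exec"
)

// Spot-check sketch: ask the cluster whether the metrics APIService registered
// by the addon has become Available. Object name and paths are assumptions.
func main() {
	out, err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.3/kubectl",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"get", "apiservice", "v1beta1.metrics.k8s.io",
		"-o", `jsonpath={.status.conditions[?(@.type=="Available")].status}`).CombinedOutput()
	fmt.Println(string(out), err)
}
```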
	I1101 01:06:06.705499   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:09.207584   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:09.922465   58676 pod_ready.go:81] duration metric: took 4m0.000913678s waiting for pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace to be "Ready" ...
	E1101 01:06:09.922511   58676 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1101 01:06:09.922529   58676 pod_ready.go:38] duration metric: took 4m11.570999497s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:06:09.922566   58676 kubeadm.go:640] restartCluster took 4m30.866358786s
	W1101 01:06:09.922644   58676 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1101 01:06:09.922688   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
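	(Note: the pod_ready lines for metrics-server-57f55c9bc5-49wtw above end the way several failures in this report do: the 4m0s WaitExtra budget expires, waitPodCondition surfaces "context deadline exceeded", and minikube falls back to `kubeadm reset` followed by a fresh init instead of restarting the existing cluster. A hedged sketch of that deadline pattern -- the pod name is taken from the log, but the kubectl-based check is an illustrative stand-in for minikube's internal client:)

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// Keep checking a pod's Ready condition until it is "True" or the context
// deadline expires, at which point ctx.Err() is "context deadline exceeded".
func waitPodReady(ctx context.Context, namespace, pod string) error {
	for {
		out, _ := exec.CommandContext(ctx, "kubectl", "-n", namespace, "get", "pod", pod,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // mirrors the 4m0s timeout seen in the log
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	fmt.Println(waitPodReady(ctx, "kube-system", "metrics-server-57f55c9bc5-49wtw"))
}
```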
	I1101 01:06:11.075881   59148 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.083916099s)
	I1101 01:06:11.075915   59148 start.go:926] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1101 01:06:11.075946   59148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.986221728s)
	I1101 01:06:11.075997   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.076012   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.076348   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.076367   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.076377   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.076386   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.076620   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.076639   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.119713   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.119741   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.120145   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.120170   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.120145   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | Closing plugin on server side
	I1101 01:06:11.172242   59148 pod_ready.go:102] pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:11.954880   59148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.864508967s)
	I1101 01:06:11.954945   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.954960   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.955014   59148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.735537793s)
	I1101 01:06:11.955074   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.955088   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.955379   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | Closing plugin on server side
	I1101 01:06:11.955411   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.955418   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.955429   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.955438   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.957487   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | Closing plugin on server side
	I1101 01:06:11.957532   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.957549   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.957537   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.957612   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.957566   59148 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-639310"
	I1101 01:06:11.957643   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.957672   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.958036   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.958063   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.960489   59148 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I1101 01:06:07.899402   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:08.398731   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:08.898547   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:09.399015   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:09.898437   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:10.399024   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:10.899108   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:11.398482   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:11.898943   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:12.399022   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:11.962129   59148 addons.go:502] enable addons completed in 3.39833009s: enabled=[default-storageclass metrics-server storage-provisioner]
	I1101 01:06:13.684297   59148 pod_ready.go:102] pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:12.899212   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:13.398415   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:13.898444   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:14.398630   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:14.898427   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:15.399212   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:15.898869   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:16.399289   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:16.588122   58823 kubeadm.go:1081] duration metric: took 15.28901357s to wait for elevateKubeSystemPrivileges.
	I1101 01:06:16.588166   58823 kubeadm.go:406] StartCluster complete in 5m31.002121514s
	I1101 01:06:16.588190   58823 settings.go:142] acquiring lock: {Name:mk7f269e64dfd8d176737f993e01f6e6badafbc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:06:16.588290   58823 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 01:06:16.590925   58823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/kubeconfig: {Name:mk08da65b6c71084e1cfafb19800038e8c8303e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:06:16.591235   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 01:06:16.591339   58823 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1101 01:06:16.591416   58823 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-330042"
	I1101 01:06:16.591436   58823 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-330042"
	W1101 01:06:16.591444   58823 addons.go:240] addon storage-provisioner should already be in state true
	I1101 01:06:16.591477   58823 config.go:182] Loaded profile config "old-k8s-version-330042": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1101 01:06:16.591517   58823 host.go:66] Checking if "old-k8s-version-330042" exists ...
	I1101 01:06:16.591525   58823 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-330042"
	I1101 01:06:16.591541   58823 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-330042"
	I1101 01:06:16.591923   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.591924   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.591962   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.591980   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.592045   58823 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-330042"
	I1101 01:06:16.592064   58823 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-330042"
	W1101 01:06:16.592071   58823 addons.go:240] addon metrics-server should already be in state true
	I1101 01:06:16.592104   58823 host.go:66] Checking if "old-k8s-version-330042" exists ...
	I1101 01:06:16.592424   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.592468   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.610602   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35459
	I1101 01:06:16.611188   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.611722   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.611752   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.611893   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35425
	I1101 01:06:16.612233   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.612315   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.612802   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.612841   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.613196   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.613215   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.613550   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.613571   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39319
	I1101 01:06:16.613949   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.614126   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.614159   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.614425   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.614438   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.614811   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.614997   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetState
	I1101 01:06:16.617747   58823 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-330042"
	W1101 01:06:16.617763   58823 addons.go:240] addon default-storageclass should already be in state true
	I1101 01:06:16.617783   58823 host.go:66] Checking if "old-k8s-version-330042" exists ...
	I1101 01:06:16.618021   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.618044   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.633877   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37903
	I1101 01:06:16.634227   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34049
	I1101 01:06:16.634436   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.635052   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.635225   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.635251   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.635588   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.635603   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.635656   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.636032   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.636092   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetState
	I1101 01:06:16.636310   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetState
	I1101 01:06:16.637897   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:06:16.640069   58823 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:06:16.638479   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:06:16.640887   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35501
	I1101 01:06:16.641511   58823 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:06:16.641523   58823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 01:06:16.641540   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:06:16.642477   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.643099   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.643115   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.643826   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.644397   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.644432   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.644515   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:06:16.644534   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:06:16.644549   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:06:16.644743   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:06:16.644908   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:06:16.645006   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:06:16.645102   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:06:16.648901   58823 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1101 01:06:16.650287   58823 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 01:06:16.650299   58823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 01:06:16.650316   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:06:16.654323   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:06:16.654694   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:06:16.654720   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:06:16.655020   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:06:16.655268   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:06:16.655450   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:06:16.655600   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:06:16.663888   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32991
	I1101 01:06:16.664490   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.665023   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.665049   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.665533   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.665720   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetState
	I1101 01:06:16.667516   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:06:16.667817   58823 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 01:06:16.667837   58823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 01:06:16.667856   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:06:16.670789   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:06:16.671306   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:06:16.671332   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:06:16.671519   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:06:16.671688   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:06:16.671811   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:06:16.671974   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:06:16.738145   58823 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-330042" context rescaled to 1 replicas
	I1101 01:06:16.738193   58823 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 01:06:16.740269   58823 out.go:177] * Verifying Kubernetes components...
	I1101 01:06:16.741889   58823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:06:16.827316   58823 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 01:06:16.827347   58823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1101 01:06:16.846888   58823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:06:16.868760   58823 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-330042" to be "Ready" ...
	I1101 01:06:16.868848   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 01:06:16.885920   58823 node_ready.go:49] node "old-k8s-version-330042" has status "Ready":"True"
	I1101 01:06:16.885962   58823 node_ready.go:38] duration metric: took 17.171382ms waiting for node "old-k8s-version-330042" to be "Ready" ...
	I1101 01:06:16.885975   58823 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:06:16.907070   58823 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-v2xlz" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:16.929166   58823 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 01:06:16.929190   58823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 01:06:16.946209   58823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 01:06:17.010599   58823 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:06:17.010628   58823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 01:06:17.132054   58823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:06:17.868039   58823 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1101 01:06:17.868039   58823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.021104248s)
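For context: the bash pipeline logged at 01:06:16.868848 rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway (192.168.39.1), and the "host record injected" line above confirms it. A minimal client-go sketch of the same edit, under the assumption of that kubeconfig path and an approximate Corefile layout (minikube itself shells out to kubectl as the log shows):

// inject_host_record.go: add a hosts{} entry for host.minikube.internal to the
// coredns ConfigMap, mirroring the sed pipeline logged above (sketch only).
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; on the minikube node it is /var/lib/minikube/kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Hosts block taken from the sed script in the log; indentation is approximate.
	hosts := "hosts {\n   192.168.39.1 host.minikube.internal\n   fallthrough\n}\n"
	corefile := cm.Data["Corefile"]
	if !strings.Contains(corefile, "host.minikube.internal") {
		// Insert the block just before the forward directive, as the sed expression does.
		cm.Data["Corefile"] = strings.Replace(corefile, "forward .", hosts+"forward .", 1)
		if _, err := client.CoreV1().ConfigMaps("kube-system").Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
	fmt.Println("host record injected into CoreDNS's ConfigMap")
}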
	I1101 01:06:17.868120   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:17.868126   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:17.868140   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:17.868142   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:17.870315   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Closing plugin on server side
	I1101 01:06:17.870338   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Closing plugin on server side
	I1101 01:06:17.870352   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:17.870364   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:17.870378   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:17.870400   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:17.870429   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:17.870439   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:17.870448   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:17.870470   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:17.870865   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Closing plugin on server side
	I1101 01:06:17.870866   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Closing plugin on server side
	I1101 01:06:17.870876   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:17.870890   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:17.870899   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:17.870915   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:17.920542   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:17.920570   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:17.920923   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Closing plugin on server side
	I1101 01:06:17.920969   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:17.920980   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:18.189030   58823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.056928538s)
	I1101 01:06:18.189096   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:18.189109   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:18.189446   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Closing plugin on server side
	I1101 01:06:18.189464   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:18.189476   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:18.189486   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:18.189506   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:18.189735   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:18.189752   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:18.189760   58823 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-330042"
	I1101 01:06:18.192103   58823 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1101 01:06:16.156689   59148 pod_ready.go:102] pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:18.158318   59148 pod_ready.go:102] pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:18.194035   58823 addons.go:502] enable addons completed in 1.602699312s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1101 01:06:18.978162   58823 pod_ready.go:102] pod "coredns-5644d7b6d9-v2xlz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:21.456448   58823 pod_ready.go:102] pod "coredns-5644d7b6d9-v2xlz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:20.657398   59148 pod_ready.go:102] pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:22.156680   59148 pod_ready.go:97] pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.72.97 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-11-01 01:06:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-11-01 01:06:11 +0000 UTC,FinishedAt:2023-11-01 01:06:21 +0000 UTC,ContainerID:cri-o://1ecc4b16207e32548d5d59a4bb7a01519d7e5eaf75b05171abd6c8c635656811,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://1ecc4b16207e32548d5d59a4bb7a01519d7e5eaf75b05171abd6c8c635656811 Started:0xc002af16c0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1101 01:06:22.156709   59148 pod_ready.go:81] duration metric: took 13.113156669s waiting for pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace to be "Ready" ...
	E1101 01:06:22.156718   59148 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.72.97 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-11-01 01:06:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-11-01 01:06:11 +0000 UTC,FinishedAt:2023-11-01 01:06:21 +0000 UTC,ContainerID:cri-o://1ecc4b16207e32548d5d59a4bb7a01519d7e5eaf75b05171abd6c8c635656811,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://1ecc4b16207e32548d5d59a4bb7a01519d7e5eaf75b05171abd6c8c635656811 Started:0xc002af16c0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1101 01:06:22.156726   59148 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rgzt8" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.163387   59148 pod_ready.go:92] pod "coredns-5dd5756b68-rgzt8" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:22.163410   59148 pod_ready.go:81] duration metric: took 6.677078ms waiting for pod "coredns-5dd5756b68-rgzt8" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.163423   59148 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.168499   59148 pod_ready.go:92] pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:22.168519   59148 pod_ready.go:81] duration metric: took 5.088683ms waiting for pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.168528   59148 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.174117   59148 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:22.174143   59148 pod_ready.go:81] duration metric: took 5.607251ms waiting for pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.174157   59148 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.179321   59148 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:22.179344   59148 pod_ready.go:81] duration metric: took 5.178241ms waiting for pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.179356   59148 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kzgzn" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.554016   59148 pod_ready.go:92] pod "kube-proxy-kzgzn" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:22.554047   59148 pod_ready.go:81] duration metric: took 374.683914ms waiting for pod "kube-proxy-kzgzn" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.554061   59148 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.954192   59148 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:22.954216   59148 pod_ready.go:81] duration metric: took 400.146517ms waiting for pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.954226   59148 pod_ready.go:38] duration metric: took 13.943077925s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
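The pod_ready.go lines above poll each system-critical pod until its Ready condition reports True, and treat a pod that has moved to the Succeeded phase (like the replaced coredns replica earlier) as a skip. A rough client-go sketch of that per-pod check; the kubeconfig path, pod name, and poll interval are assumptions for illustration:

// podready.go: poll a pod until its Ready condition is True or it reaches the
// Succeeded phase, roughly what pod_ready.go logs above (sketch only).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(client kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			if pod.Status.Phase == corev1.PodSucceeded {
				return fmt.Errorf("pod %q completed instead of staying Ready", name)
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // poll interval is an assumption
	}
	return fmt.Errorf("pod %q not Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(client, "kube-system", "coredns-5dd5756b68-rgzt8", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}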
	I1101 01:06:22.954243   59148 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:06:22.954294   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:06:22.970594   59148 api_server.go:72] duration metric: took 14.351804953s to wait for apiserver process to appear ...
	I1101 01:06:22.970621   59148 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:06:22.970638   59148 api_server.go:253] Checking apiserver healthz at https://192.168.72.97:8444/healthz ...
	I1101 01:06:22.976061   59148 api_server.go:279] https://192.168.72.97:8444/healthz returned 200:
	ok
	I1101 01:06:22.977368   59148 api_server.go:141] control plane version: v1.28.3
	I1101 01:06:22.977390   59148 api_server.go:131] duration metric: took 6.761145ms to wait for apiserver health ...
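The healthz probe logged above simply expects HTTP 200 with the body "ok" from the apiserver endpoint. A small Go sketch of that check; skipping TLS verification is a shortcut for brevity (minikube's own check trusts the cluster CA):

// healthz.go: probe the apiserver /healthz endpoint, as logged above (sketch only).
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify keeps the sketch short; the real check verifies against the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.72.97:8444/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%s\n", resp.StatusCode, string(body)) // expect 200 and "ok"
}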
	I1101 01:06:22.977398   59148 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:06:23.156987   59148 system_pods.go:59] 8 kube-system pods found
	I1101 01:06:23.157014   59148 system_pods.go:61] "coredns-5dd5756b68-rgzt8" [6d136c6a-e0b2-44c3-a17b-85649d6ff7b7] Running
	I1101 01:06:23.157018   59148 system_pods.go:61] "etcd-default-k8s-diff-port-639310" [9cc2eba7-c72f-4a6f-9c55-8cce5586b574] Running
	I1101 01:06:23.157024   59148 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-639310" [e2b16d1e-af9f-452e-8243-5267f781ab19] Running
	I1101 01:06:23.157028   59148 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-639310" [9173e21f-a13f-4234-94a1-1976881ee23d] Running
	I1101 01:06:23.157034   59148 system_pods.go:61] "kube-proxy-kzgzn" [32d59980-f28a-482c-9aa8-8502915417f0] Running
	I1101 01:06:23.157038   59148 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-639310" [449df462-911a-4afa-8ca5-f9fccce9ecac] Running
	I1101 01:06:23.157046   59148 system_pods.go:61] "metrics-server-57f55c9bc5-65ph4" [4683706e-65f6-4845-a5ad-60da8cd20d8e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:23.157053   59148 system_pods.go:61] "storage-provisioner" [eaba9583-e564-4804-9cd3-2b4de36c85da] Running
	I1101 01:06:23.157060   59148 system_pods.go:74] duration metric: took 179.656649ms to wait for pod list to return data ...
	I1101 01:06:23.157067   59148 default_sa.go:34] waiting for default service account to be created ...
	I1101 01:06:23.352990   59148 default_sa.go:45] found service account: "default"
	I1101 01:06:23.353024   59148 default_sa.go:55] duration metric: took 195.950242ms for default service account to be created ...
	I1101 01:06:23.353034   59148 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 01:06:23.557472   59148 system_pods.go:86] 8 kube-system pods found
	I1101 01:06:23.557498   59148 system_pods.go:89] "coredns-5dd5756b68-rgzt8" [6d136c6a-e0b2-44c3-a17b-85649d6ff7b7] Running
	I1101 01:06:23.557505   59148 system_pods.go:89] "etcd-default-k8s-diff-port-639310" [9cc2eba7-c72f-4a6f-9c55-8cce5586b574] Running
	I1101 01:06:23.557512   59148 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-639310" [e2b16d1e-af9f-452e-8243-5267f781ab19] Running
	I1101 01:06:23.557518   59148 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-639310" [9173e21f-a13f-4234-94a1-1976881ee23d] Running
	I1101 01:06:23.557524   59148 system_pods.go:89] "kube-proxy-kzgzn" [32d59980-f28a-482c-9aa8-8502915417f0] Running
	I1101 01:06:23.557531   59148 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-639310" [449df462-911a-4afa-8ca5-f9fccce9ecac] Running
	I1101 01:06:23.557541   59148 system_pods.go:89] "metrics-server-57f55c9bc5-65ph4" [4683706e-65f6-4845-a5ad-60da8cd20d8e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:23.557554   59148 system_pods.go:89] "storage-provisioner" [eaba9583-e564-4804-9cd3-2b4de36c85da] Running
	I1101 01:06:23.557561   59148 system_pods.go:126] duration metric: took 204.522772ms to wait for k8s-apps to be running ...
	I1101 01:06:23.557571   59148 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 01:06:23.557614   59148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:06:23.572950   59148 system_svc.go:56] duration metric: took 15.367105ms WaitForService to wait for kubelet.
	I1101 01:06:23.572979   59148 kubeadm.go:581] duration metric: took 14.954198383s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1101 01:06:23.572995   59148 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:06:23.754816   59148 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:06:23.754852   59148 node_conditions.go:123] node cpu capacity is 2
	I1101 01:06:23.754865   59148 node_conditions.go:105] duration metric: took 181.864765ms to run NodePressure ...
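The node_conditions lines above read the node's reported ephemeral-storage and CPU capacity before declaring the NodePressure check done. Reading the same fields with client-go might look like this (kubeconfig path assumed):

// nodecapacity.go: print each node's ephemeral-storage and CPU capacity,
// the values shown in the node_conditions lines above (sketch only).
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}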
	I1101 01:06:23.754879   59148 start.go:228] waiting for startup goroutines ...
	I1101 01:06:23.754887   59148 start.go:233] waiting for cluster config update ...
	I1101 01:06:23.754902   59148 start.go:242] writing updated cluster config ...
	I1101 01:06:23.755221   59148 ssh_runner.go:195] Run: rm -f paused
	I1101 01:06:23.805298   59148 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1101 01:06:23.807226   59148 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-639310" cluster and "default" namespace by default
	I1101 01:06:24.353352   58676 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.430634921s)
	I1101 01:06:24.353418   58676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:06:24.367115   58676 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:06:24.376272   58676 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:06:24.385067   58676 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:06:24.385105   58676 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1101 01:06:24.436586   58676 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1101 01:06:24.436698   58676 kubeadm.go:322] [preflight] Running pre-flight checks
	I1101 01:06:24.592267   58676 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 01:06:24.592409   58676 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 01:06:24.592529   58676 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 01:06:24.834834   58676 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 01:06:24.836680   58676 out.go:204]   - Generating certificates and keys ...
	I1101 01:06:24.836825   58676 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1101 01:06:24.836918   58676 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1101 01:06:24.837052   58676 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 01:06:24.837150   58676 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1101 01:06:24.837378   58676 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1101 01:06:24.838501   58676 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1101 01:06:24.838970   58676 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1101 01:06:24.839488   58676 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1101 01:06:24.840058   58676 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 01:06:24.840454   58676 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 01:06:24.840925   58676 kubeadm.go:322] [certs] Using the existing "sa" key
	I1101 01:06:24.841017   58676 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 01:06:25.117460   58676 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 01:06:25.218894   58676 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 01:06:25.319416   58676 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 01:06:25.555023   58676 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 01:06:25.555490   58676 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 01:06:25.558041   58676 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 01:06:25.559946   58676 out.go:204]   - Booting up control plane ...
	I1101 01:06:25.560090   58676 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 01:06:25.560212   58676 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 01:06:25.560321   58676 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 01:06:25.577307   58676 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 01:06:25.580427   58676 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 01:06:25.580508   58676 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1101 01:06:25.710362   58676 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 01:06:23.963710   58823 pod_ready.go:102] pod "coredns-5644d7b6d9-v2xlz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:26.455851   58823 pod_ready.go:92] pod "coredns-5644d7b6d9-v2xlz" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:26.455880   58823 pod_ready.go:81] duration metric: took 9.548782268s waiting for pod "coredns-5644d7b6d9-v2xlz" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:26.455889   58823 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hkl2m" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:26.461243   58823 pod_ready.go:92] pod "kube-proxy-hkl2m" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:26.461277   58823 pod_ready.go:81] duration metric: took 5.380815ms waiting for pod "kube-proxy-hkl2m" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:26.461289   58823 pod_ready.go:38] duration metric: took 9.575303239s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:06:26.461314   58823 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:06:26.461372   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:06:26.476212   58823 api_server.go:72] duration metric: took 9.737981323s to wait for apiserver process to appear ...
	I1101 01:06:26.476245   58823 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:06:26.476268   58823 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I1101 01:06:26.483060   58823 api_server.go:279] https://192.168.39.90:8443/healthz returned 200:
	ok
	I1101 01:06:26.484299   58823 api_server.go:141] control plane version: v1.16.0
	I1101 01:06:26.484328   58823 api_server.go:131] duration metric: took 8.074303ms to wait for apiserver health ...
	I1101 01:06:26.484342   58823 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:06:26.488710   58823 system_pods.go:59] 4 kube-system pods found
	I1101 01:06:26.488745   58823 system_pods.go:61] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:26.488753   58823 system_pods.go:61] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:26.488766   58823 system_pods.go:61] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:26.488775   58823 system_pods.go:61] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:26.488787   58823 system_pods.go:74] duration metric: took 4.438458ms to wait for pod list to return data ...
	I1101 01:06:26.488797   58823 default_sa.go:34] waiting for default service account to be created ...
	I1101 01:06:26.492513   58823 default_sa.go:45] found service account: "default"
	I1101 01:06:26.492543   58823 default_sa.go:55] duration metric: took 3.739583ms for default service account to be created ...
	I1101 01:06:26.492553   58823 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 01:06:26.496897   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:26.496924   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:26.496929   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:26.496936   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:26.496942   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:26.496956   58823 retry.go:31] will retry after 215.348005ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:26.718021   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:26.718055   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:26.718064   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:26.718080   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:26.718086   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:26.718103   58823 retry.go:31] will retry after 357.067185ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:27.080480   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:27.080519   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:27.080528   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:27.080539   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:27.080548   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:27.080572   58823 retry.go:31] will retry after 441.083478ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:27.528922   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:27.528955   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:27.528964   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:27.528975   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:27.528984   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:27.529008   58823 retry.go:31] will retry after 595.152055ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:28.129735   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:28.129760   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:28.129765   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:28.129772   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:28.129778   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:28.129794   58823 retry.go:31] will retry after 591.454083ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:28.726058   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:28.726089   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:28.726097   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:28.726108   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:28.726118   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:28.726142   58823 retry.go:31] will retry after 682.338416ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:29.414282   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:29.414311   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:29.414321   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:29.414330   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:29.414338   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:29.414356   58823 retry.go:31] will retry after 953.248535ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:30.373950   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:30.373989   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:30.373998   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:30.374017   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:30.374028   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:30.374048   58823 retry.go:31] will retry after 1.291166145s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:31.671462   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:31.671516   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:31.671526   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:31.671537   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:31.671546   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:31.671565   58823 retry.go:31] will retry after 1.413833897s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:33.713596   58676 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002646 seconds
	I1101 01:06:33.713733   58676 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 01:06:33.731994   58676 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 01:06:34.275298   58676 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 01:06:34.275497   58676 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-008483 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 01:06:34.792259   58676 kubeadm.go:322] [bootstrap-token] Using token: ft1765.cra2ecqpjz8r5s0a
	I1101 01:06:34.793944   58676 out.go:204]   - Configuring RBAC rules ...
	I1101 01:06:34.794105   58676 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 01:06:34.800902   58676 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 01:06:34.811310   58676 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 01:06:34.821309   58676 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 01:06:34.826523   58676 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 01:06:34.832305   58676 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 01:06:34.852131   58676 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 01:06:35.137771   58676 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1101 01:06:35.206006   58676 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1101 01:06:35.207223   58676 kubeadm.go:322] 
	I1101 01:06:35.207316   58676 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1101 01:06:35.207327   58676 kubeadm.go:322] 
	I1101 01:06:35.207404   58676 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1101 01:06:35.207413   58676 kubeadm.go:322] 
	I1101 01:06:35.207448   58676 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1101 01:06:35.207528   58676 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 01:06:35.207619   58676 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 01:06:35.207640   58676 kubeadm.go:322] 
	I1101 01:06:35.207703   58676 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1101 01:06:35.207722   58676 kubeadm.go:322] 
	I1101 01:06:35.207796   58676 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 01:06:35.207805   58676 kubeadm.go:322] 
	I1101 01:06:35.207878   58676 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1101 01:06:35.208001   58676 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 01:06:35.208102   58676 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 01:06:35.208111   58676 kubeadm.go:322] 
	I1101 01:06:35.208207   58676 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 01:06:35.208314   58676 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1101 01:06:35.208337   58676 kubeadm.go:322] 
	I1101 01:06:35.208459   58676 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ft1765.cra2ecqpjz8r5s0a \
	I1101 01:06:35.208636   58676 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 \
	I1101 01:06:35.208674   58676 kubeadm.go:322] 	--control-plane 
	I1101 01:06:35.208687   58676 kubeadm.go:322] 
	I1101 01:06:35.208812   58676 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1101 01:06:35.208823   58676 kubeadm.go:322] 
	I1101 01:06:35.208936   58676 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ft1765.cra2ecqpjz8r5s0a \
	I1101 01:06:35.209057   58676 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 
	I1101 01:06:35.209758   58676 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 01:06:35.209780   58676 cni.go:84] Creating CNI manager for ""
	I1101 01:06:35.209790   58676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:06:35.211735   58676 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:06:35.213123   58676 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:06:35.235025   58676 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
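The line above records minikube writing a 457-byte bridge conflist to /etc/cni/net.d/1-k8s.conflist, but the file contents are not reproduced in the log. A typical bridge CNI configuration of that shape looks roughly like the following; the field values are illustrative, not the exact bytes minikube wrote:

{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}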
	I1101 01:06:35.271015   58676 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 01:06:35.271092   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9 minikube.k8s.io/name=no-preload-008483 minikube.k8s.io/updated_at=2023_11_01T01_06_35_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:35.271099   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:35.305061   58676 ops.go:34] apiserver oom_adj: -16
	I1101 01:06:35.663339   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:35.805680   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:33.090990   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:33.091030   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:33.091038   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:33.091049   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:33.091060   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:33.091078   58823 retry.go:31] will retry after 2.252641435s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:35.350673   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:35.350703   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:35.350711   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:35.350722   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:35.350735   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:35.350753   58823 retry.go:31] will retry after 2.131984659s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:36.402770   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:36.902353   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:37.402763   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:37.902598   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:38.401883   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:38.902775   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:39.402062   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:39.902544   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:40.402350   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:40.901853   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:37.489100   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:37.489127   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:37.489132   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:37.489141   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:37.489151   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:37.489169   58823 retry.go:31] will retry after 3.273821759s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:40.767389   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:40.767409   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:40.767414   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:40.767421   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:40.767427   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:40.767441   58823 retry.go:31] will retry after 4.351278698s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:41.402632   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:41.901859   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:42.402379   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:42.902816   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:43.402503   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:43.902158   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:44.402562   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:44.901867   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:45.401852   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:45.902865   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:45.124108   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:45.124138   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:45.124147   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:45.124158   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:45.124166   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:45.124184   58823 retry.go:31] will retry after 4.53047058s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
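The retry.go lines above keep listing kube-system pods with a growing delay until the static control-plane pods (etcd, kube-apiserver, kube-controller-manager, kube-scheduler) show up. A compact sketch of that poll-with-backoff pattern; the component names come from the log, while the backoff growth factor and kubeconfig path are assumptions:

// retrypods.go: poll until the named control-plane components appear in
// kube-system, backing off between attempts like retry.go above (sketch only).
package main

import (
	"context"
	"fmt"
	"strings"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func missingComponents(client kubernetes.Interface) ([]string, error) {
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	want := []string{"etcd", "kube-apiserver", "kube-controller-manager", "kube-scheduler"}
	var missing []string
	for _, w := range want {
		found := false
		for _, p := range pods.Items {
			if strings.HasPrefix(p.Name, w) {
				found = true
				break
			}
		}
		if !found {
			missing = append(missing, w)
		}
	}
	return missing, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	backoff := 200 * time.Millisecond
	for deadline := time.Now().Add(6 * time.Minute); time.Now().Before(deadline); {
		missing, err := missingComponents(client)
		if err == nil && len(missing) == 0 {
			fmt.Println("all control-plane components present")
			return
		}
		fmt.Printf("will retry after %s: missing components: %v\n", backoff, missing)
		time.Sleep(backoff)
		backoff += backoff / 2 // grow the delay, mirroring the increasing waits in the log
	}
	fmt.Println("timed out waiting for control-plane pods")
}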
	I1101 01:06:46.402463   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:46.902480   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:47.402022   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:47.568628   58676 kubeadm.go:1081] duration metric: took 12.297606595s to wait for elevateKubeSystemPrivileges.
	I1101 01:06:47.568672   58676 kubeadm.go:406] StartCluster complete in 5m8.570526689s
	I1101 01:06:47.568696   58676 settings.go:142] acquiring lock: {Name:mk7f269e64dfd8d176737f993e01f6e6badafbc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:06:47.568787   58676 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 01:06:47.570839   58676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/kubeconfig: {Name:mk08da65b6c71084e1cfafb19800038e8c8303e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:06:47.571093   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 01:06:47.571207   58676 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1101 01:06:47.571281   58676 addons.go:69] Setting storage-provisioner=true in profile "no-preload-008483"
	I1101 01:06:47.571307   58676 addons.go:69] Setting metrics-server=true in profile "no-preload-008483"
	I1101 01:06:47.571329   58676 addons.go:231] Setting addon metrics-server=true in "no-preload-008483"
	I1101 01:06:47.571345   58676 config.go:182] Loaded profile config "no-preload-008483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:06:47.571360   58676 addons.go:69] Setting default-storageclass=true in profile "no-preload-008483"
	I1101 01:06:47.571369   58676 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-008483"
	W1101 01:06:47.571348   58676 addons.go:240] addon metrics-server should already be in state true
	I1101 01:06:47.571441   58676 host.go:66] Checking if "no-preload-008483" exists ...
	I1101 01:06:47.571312   58676 addons.go:231] Setting addon storage-provisioner=true in "no-preload-008483"
	W1101 01:06:47.571490   58676 addons.go:240] addon storage-provisioner should already be in state true
	I1101 01:06:47.571527   58676 host.go:66] Checking if "no-preload-008483" exists ...
	I1101 01:06:47.571816   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.571815   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.571873   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.571892   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.571873   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.572006   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.590259   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39063
	I1101 01:06:47.590724   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.591055   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39635
	I1101 01:06:47.591202   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.591220   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.591229   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46549
	I1101 01:06:47.591621   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.591707   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.591743   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.592428   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.592471   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.592794   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.592808   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.592822   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.592826   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.593236   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.593283   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.593437   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetState
	I1101 01:06:47.593927   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.593966   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.598345   58676 addons.go:231] Setting addon default-storageclass=true in "no-preload-008483"
	W1101 01:06:47.598381   58676 addons.go:240] addon default-storageclass should already be in state true
	I1101 01:06:47.598413   58676 host.go:66] Checking if "no-preload-008483" exists ...
	I1101 01:06:47.598819   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.598871   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.613965   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43751
	I1101 01:06:47.614004   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40855
	I1101 01:06:47.614542   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.614669   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.615105   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.615121   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.615151   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.615189   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.615476   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.615537   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.615690   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetState
	I1101 01:06:47.615767   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetState
	I1101 01:06:47.617847   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:06:47.620144   58676 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:06:47.618264   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45253
	I1101 01:06:47.618444   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:06:47.621319   58676 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-008483" context rescaled to 1 replicas
	I1101 01:06:47.621520   58676 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.140 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 01:06:47.623048   58676 out.go:177] * Verifying Kubernetes components...
	I1101 01:06:47.621641   58676 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:06:47.621894   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.625008   58676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 01:06:47.625024   58676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:06:47.626461   58676 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1101 01:06:47.628411   58676 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 01:06:47.628425   58676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 01:06:47.628439   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:06:47.626617   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:06:47.627063   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.628510   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.628907   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.629438   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.629480   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.631968   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:06:47.632175   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:06:47.632212   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:06:47.632315   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:06:47.632508   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:06:47.632679   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:06:47.632739   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:06:47.632795   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:06:47.633383   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:06:47.633403   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:06:47.633427   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:06:47.633584   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:06:47.633708   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:06:47.633891   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:06:47.650937   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41791
	I1101 01:06:47.651372   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.651921   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.651956   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.652322   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.652536   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetState
	I1101 01:06:47.654393   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:06:47.654706   58676 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 01:06:47.654721   58676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 01:06:47.654743   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:06:47.657743   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:06:47.658176   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:06:47.658204   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:06:47.658448   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:06:47.658673   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:06:47.658836   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:06:47.659008   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:06:47.808648   58676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:06:47.837158   58676 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 01:06:47.837181   58676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1101 01:06:47.846004   58676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 01:06:47.882427   58676 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 01:06:47.882454   58676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 01:06:47.899419   58676 node_ready.go:35] waiting up to 6m0s for node "no-preload-008483" to be "Ready" ...
	I1101 01:06:47.899496   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 01:06:47.919788   58676 node_ready.go:49] node "no-preload-008483" has status "Ready":"True"
	I1101 01:06:47.919821   58676 node_ready.go:38] duration metric: took 20.370648ms waiting for node "no-preload-008483" to be "Ready" ...
	I1101 01:06:47.919836   58676 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:06:47.926205   58676 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:06:47.926232   58676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 01:06:47.930715   58676 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5tp9h" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:47.982413   58676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:06:49.813480   58676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.004790768s)
	I1101 01:06:49.813519   58676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.967476056s)
	I1101 01:06:49.813564   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:49.813588   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:49.813528   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:49.813617   58676 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.914052615s)
	I1101 01:06:49.813634   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:49.813643   58676 start.go:926] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
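
The replace pipeline logged above is how minikube publishes the host-side IP to pods: it rewrites the coredns ConfigMap so a hosts{} stanza resolving host.minikube.internal sits ahead of the forward plugin. A simplified, hand-runnable form of the same command (the kubeconfig flags and node-local binary paths from the log are omitted; this assumes kubectl already points at this cluster):

    # Insert a hosts{} block before the "forward . /etc/resolv.conf" line,
    # then push the edited ConfigMap back with kubectl replace.
    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' \
      | kubectl -n kube-system replace -f -
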
	I1101 01:06:49.813924   58676 main.go:141] libmachine: (no-preload-008483) DBG | Closing plugin on server side
	I1101 01:06:49.813935   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:49.813956   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:49.813970   58676 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:49.813979   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:49.813980   58676 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:49.813990   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:49.813991   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:49.814014   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:49.814239   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:49.814258   58676 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:49.814321   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:49.814339   58676 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:49.857721   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:49.857749   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:49.858034   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:49.858053   58676 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:50.026844   58676 pod_ready.go:97] error getting pod "coredns-5dd5756b68-5tp9h" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-5tp9h" not found
	I1101 01:06:50.026876   58676 pod_ready.go:81] duration metric: took 2.096134316s waiting for pod "coredns-5dd5756b68-5tp9h" in "kube-system" namespace to be "Ready" ...
	E1101 01:06:50.026888   58676 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-5tp9h" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-5tp9h" not found
	I1101 01:06:50.026898   58676 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-m8v7v" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:50.204452   58676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.22199218s)
	I1101 01:06:50.204543   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:50.204561   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:50.204896   58676 main.go:141] libmachine: (no-preload-008483) DBG | Closing plugin on server side
	I1101 01:06:50.204985   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:50.205017   58676 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:50.205046   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:50.205064   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:50.205339   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:50.205360   58676 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:50.205371   58676 addons.go:467] Verifying addon metrics-server=true in "no-preload-008483"
	I1101 01:06:50.205393   58676 main.go:141] libmachine: (no-preload-008483) DBG | Closing plugin on server side
	I1101 01:06:50.207552   58676 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1101 01:06:50.208879   58676 addons.go:502] enable addons completed in 2.637673191s: enabled=[storage-provisioner default-storageclass metrics-server]
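
For reference, the addon enable sequence above amounts to copying each manifest into /etc/kubernetes/addons/ on the node (the scp lines) and applying it with the node-local kubectl against the in-VM kubeconfig. A condensed sketch of the applies, combined into one invocation for brevity (the real run issues three separate invocations, as logged):

    # Apply the storage-provisioner, default-storageclass and metrics-server
    # manifests that were scp'd onto the node above.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.28.3/kubectl apply \
      -f /etc/kubernetes/addons/storage-provisioner.yaml \
      -f /etc/kubernetes/addons/storageclass.yaml \
      -f /etc/kubernetes/addons/metrics-apiservice.yaml \
      -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
      -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
      -f /etc/kubernetes/addons/metrics-server-service.yaml
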
	I1101 01:06:49.663546   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:49.663578   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:49.663585   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:49.663595   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:49.663604   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:49.663623   58823 retry.go:31] will retry after 5.557220121s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:52.106184   58676 pod_ready.go:92] pod "coredns-5dd5756b68-m8v7v" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:52.106208   58676 pod_ready.go:81] duration metric: took 2.079304042s waiting for pod "coredns-5dd5756b68-m8v7v" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.106218   58676 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.112508   58676 pod_ready.go:92] pod "etcd-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:52.112531   58676 pod_ready.go:81] duration metric: took 6.307404ms waiting for pod "etcd-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.112540   58676 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.119263   58676 pod_ready.go:92] pod "kube-apiserver-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:52.119296   58676 pod_ready.go:81] duration metric: took 6.748553ms waiting for pod "kube-apiserver-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.119311   58676 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.125594   58676 pod_ready.go:92] pod "kube-controller-manager-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:52.125619   58676 pod_ready.go:81] duration metric: took 6.30051ms waiting for pod "kube-controller-manager-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.125629   58676 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4cx5t" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.503777   58676 pod_ready.go:92] pod "kube-proxy-4cx5t" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:52.503802   58676 pod_ready.go:81] duration metric: took 378.166648ms waiting for pod "kube-proxy-4cx5t" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.503811   58676 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.904254   58676 pod_ready.go:92] pod "kube-scheduler-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:52.904275   58676 pod_ready.go:81] duration metric: took 400.457426ms waiting for pod "kube-scheduler-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.904284   58676 pod_ready.go:38] duration metric: took 4.984437509s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
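
The pod_ready wait above polls the API server directly for the label selectors listed when the wait started. A rough hand-run equivalent using kubectl wait (not what minikube itself executes), assuming kubectl is pointed at the same cluster:

    # Wait for each class of system-critical pod to report Ready.
    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=6m
    done
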
	I1101 01:06:52.904303   58676 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:06:52.904352   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:06:52.917549   58676 api_server.go:72] duration metric: took 5.295984843s to wait for apiserver process to appear ...
	I1101 01:06:52.917576   58676 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:06:52.917595   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:06:52.926515   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 200:
	ok
	I1101 01:06:52.927673   58676 api_server.go:141] control plane version: v1.28.3
	I1101 01:06:52.927692   58676 api_server.go:131] duration metric: took 10.109726ms to wait for apiserver health ...
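
The healthz probe above can be reproduced by hand against the same endpoint; -k is needed because the apiserver certificate is issued by the cluster CA rather than a public one:

    curl -k https://192.168.50.140:8443/healthz
    # prints: ok
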
	I1101 01:06:52.927700   58676 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:06:53.109620   58676 system_pods.go:59] 8 kube-system pods found
	I1101 01:06:53.109648   58676 system_pods.go:61] "coredns-5dd5756b68-m8v7v" [351a9458-075b-40d1-96d1-86a450a99251] Running
	I1101 01:06:53.109653   58676 system_pods.go:61] "etcd-no-preload-008483" [e1db4a59-f5e6-4114-a942-1faf4ff84af2] Running
	I1101 01:06:53.109657   58676 system_pods.go:61] "kube-apiserver-no-preload-008483" [f8f8bb39-3093-44bb-8255-5a7d78437a75] Running
	I1101 01:06:53.109661   58676 system_pods.go:61] "kube-controller-manager-no-preload-008483" [a45df9e4-3399-4c21-981f-3c3caaed52a8] Running
	I1101 01:06:53.109665   58676 system_pods.go:61] "kube-proxy-4cx5t" [57c1e87a-aa14-440d-9001-a6ba2ab7c8c6] Running
	I1101 01:06:53.109670   58676 system_pods.go:61] "kube-scheduler-no-preload-008483" [329b7a2d-6146-4e08-910e-ed4d40f57dcb] Running
	I1101 01:06:53.109676   58676 system_pods.go:61] "metrics-server-57f55c9bc5-qcxt7" [bf444b92-dd54-43fc-a9a8-0e9000b562e3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:53.109684   58676 system_pods.go:61] "storage-provisioner" [909163da-9021-4cee-9a72-1bc9b6ae9390] Running
	I1101 01:06:53.109693   58676 system_pods.go:74] duration metric: took 181.986766ms to wait for pod list to return data ...
	I1101 01:06:53.109706   58676 default_sa.go:34] waiting for default service account to be created ...
	I1101 01:06:53.305872   58676 default_sa.go:45] found service account: "default"
	I1101 01:06:53.305904   58676 default_sa.go:55] duration metric: took 196.187269ms for default service account to be created ...
	I1101 01:06:53.305919   58676 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 01:06:53.506566   58676 system_pods.go:86] 8 kube-system pods found
	I1101 01:06:53.506601   58676 system_pods.go:89] "coredns-5dd5756b68-m8v7v" [351a9458-075b-40d1-96d1-86a450a99251] Running
	I1101 01:06:53.506610   58676 system_pods.go:89] "etcd-no-preload-008483" [e1db4a59-f5e6-4114-a942-1faf4ff84af2] Running
	I1101 01:06:53.506618   58676 system_pods.go:89] "kube-apiserver-no-preload-008483" [f8f8bb39-3093-44bb-8255-5a7d78437a75] Running
	I1101 01:06:53.506625   58676 system_pods.go:89] "kube-controller-manager-no-preload-008483" [a45df9e4-3399-4c21-981f-3c3caaed52a8] Running
	I1101 01:06:53.506631   58676 system_pods.go:89] "kube-proxy-4cx5t" [57c1e87a-aa14-440d-9001-a6ba2ab7c8c6] Running
	I1101 01:06:53.506640   58676 system_pods.go:89] "kube-scheduler-no-preload-008483" [329b7a2d-6146-4e08-910e-ed4d40f57dcb] Running
	I1101 01:06:53.506651   58676 system_pods.go:89] "metrics-server-57f55c9bc5-qcxt7" [bf444b92-dd54-43fc-a9a8-0e9000b562e3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:53.506664   58676 system_pods.go:89] "storage-provisioner" [909163da-9021-4cee-9a72-1bc9b6ae9390] Running
	I1101 01:06:53.506675   58676 system_pods.go:126] duration metric: took 200.749464ms to wait for k8s-apps to be running ...
	I1101 01:06:53.506692   58676 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 01:06:53.506747   58676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:06:53.519471   58676 system_svc.go:56] duration metric: took 12.766173ms WaitForService to wait for kubelet.
	I1101 01:06:53.519502   58676 kubeadm.go:581] duration metric: took 5.897944072s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1101 01:06:53.519525   58676 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:06:53.705460   58676 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:06:53.705490   58676 node_conditions.go:123] node cpu capacity is 2
	I1101 01:06:53.705501   58676 node_conditions.go:105] duration metric: took 185.970851ms to run NodePressure ...
	I1101 01:06:53.705515   58676 start.go:228] waiting for startup goroutines ...
	I1101 01:06:53.705523   58676 start.go:233] waiting for cluster config update ...
	I1101 01:06:53.705537   58676 start.go:242] writing updated cluster config ...
	I1101 01:06:53.705824   58676 ssh_runner.go:195] Run: rm -f paused
	I1101 01:06:53.758508   58676 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1101 01:06:53.761998   58676 out.go:177] * Done! kubectl is now configured to use "no-preload-008483" cluster and "default" namespace by default
	I1101 01:06:55.226416   58823 system_pods.go:86] 5 kube-system pods found
	I1101 01:06:55.226443   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:55.226449   58823 system_pods.go:89] "kube-apiserver-old-k8s-version-330042" [1d813832-7c56-439f-aee9-c5c326e6cd3d] Pending
	I1101 01:06:55.226453   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:55.226460   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:55.226466   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:55.226480   58823 retry.go:31] will retry after 6.901184226s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:07:02.133379   58823 system_pods.go:86] 5 kube-system pods found
	I1101 01:07:02.133412   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:07:02.133421   58823 system_pods.go:89] "kube-apiserver-old-k8s-version-330042" [1d813832-7c56-439f-aee9-c5c326e6cd3d] Running
	I1101 01:07:02.133427   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:07:02.133442   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:07:02.133451   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:07:02.133471   58823 retry.go:31] will retry after 10.272464072s: missing components: etcd, kube-controller-manager, kube-scheduler
	I1101 01:07:12.412133   58823 system_pods.go:86] 5 kube-system pods found
	I1101 01:07:12.412166   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:07:12.412175   58823 system_pods.go:89] "kube-apiserver-old-k8s-version-330042" [1d813832-7c56-439f-aee9-c5c326e6cd3d] Running
	I1101 01:07:12.412181   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:07:12.412193   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:07:12.412202   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:07:12.412221   58823 retry.go:31] will retry after 11.290918588s: missing components: etcd, kube-controller-manager, kube-scheduler
	I1101 01:07:23.709462   58823 system_pods.go:86] 8 kube-system pods found
	I1101 01:07:23.709495   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:07:23.709503   58823 system_pods.go:89] "etcd-old-k8s-version-330042" [fc62fe53-9611-4b3d-9dca-a30d58618b2b] Running
	I1101 01:07:23.709510   58823 system_pods.go:89] "kube-apiserver-old-k8s-version-330042" [1d813832-7c56-439f-aee9-c5c326e6cd3d] Running
	I1101 01:07:23.709517   58823 system_pods.go:89] "kube-controller-manager-old-k8s-version-330042" [8ad0ccf9-fa8e-4205-b89c-f5f57cb7be6e] Running
	I1101 01:07:23.709524   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:07:23.709528   58823 system_pods.go:89] "kube-scheduler-old-k8s-version-330042" [2b077f6b-8077-4ccb-93c2-c6d3383b1113] Pending
	I1101 01:07:23.709534   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:07:23.709543   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:07:23.709559   58823 retry.go:31] will retry after 12.900513481s: missing components: kube-scheduler
	I1101 01:07:36.615720   58823 system_pods.go:86] 8 kube-system pods found
	I1101 01:07:36.615746   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:07:36.615751   58823 system_pods.go:89] "etcd-old-k8s-version-330042" [fc62fe53-9611-4b3d-9dca-a30d58618b2b] Running
	I1101 01:07:36.615756   58823 system_pods.go:89] "kube-apiserver-old-k8s-version-330042" [1d813832-7c56-439f-aee9-c5c326e6cd3d] Running
	I1101 01:07:36.615760   58823 system_pods.go:89] "kube-controller-manager-old-k8s-version-330042" [8ad0ccf9-fa8e-4205-b89c-f5f57cb7be6e] Running
	I1101 01:07:36.615763   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:07:36.615767   58823 system_pods.go:89] "kube-scheduler-old-k8s-version-330042" [2b077f6b-8077-4ccb-93c2-c6d3383b1113] Running
	I1101 01:07:36.615774   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:07:36.615780   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:07:36.615787   58823 system_pods.go:126] duration metric: took 1m10.123228938s to wait for k8s-apps to be running ...
	I1101 01:07:36.615793   58823 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 01:07:36.615837   58823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:07:36.634354   58823 system_svc.go:56] duration metric: took 18.547208ms WaitForService to wait for kubelet.
	I1101 01:07:36.634387   58823 kubeadm.go:581] duration metric: took 1m19.896166299s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1101 01:07:36.634412   58823 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:07:36.638286   58823 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:07:36.638315   58823 node_conditions.go:123] node cpu capacity is 2
	I1101 01:07:36.638329   58823 node_conditions.go:105] duration metric: took 3.911826ms to run NodePressure ...
	I1101 01:07:36.638344   58823 start.go:228] waiting for startup goroutines ...
	I1101 01:07:36.638351   58823 start.go:233] waiting for cluster config update ...
	I1101 01:07:36.638365   58823 start.go:242] writing updated cluster config ...
	I1101 01:07:36.638658   58823 ssh_runner.go:195] Run: rm -f paused
	I1101 01:07:36.688409   58823 start.go:600] kubectl: 1.28.3, cluster: 1.16.0 (minor skew: 12)
	I1101 01:07:36.690520   58823 out.go:177] 
	W1101 01:07:36.692006   58823 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.16.0.
	I1101 01:07:36.693512   58823 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1101 01:07:36.694940   58823 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-330042" cluster and "default" namespace by default
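
As the hint above suggests, a version-matched kubectl for this 1.16.0 cluster can be run through minikube itself; the -p flag (added here so the command does not depend on the current default profile) selects the old-k8s-version profile:

    # Run the bundled, version-matched kubectl against this profile's cluster.
    minikube -p old-k8s-version-330042 kubectl -- get pods -A
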
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-11-01 01:00:27 UTC, ends at Wed 2023-11-01 01:16:38 UTC. --
	Nov 01 01:16:38 old-k8s-version-330042 crio[712]: time="2023-11-01 01:16:38.396722204Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e59d0891-41db-46e6-a10d-4fad91ecd4bd name=/runtime.v1.RuntimeService/Version
	Nov 01 01:16:38 old-k8s-version-330042 crio[712]: time="2023-11-01 01:16:38.398151656Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=981b3664-7f83-45ab-9ddf-ba4f58ffe073 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:16:38 old-k8s-version-330042 crio[712]: time="2023-11-01 01:16:38.398564774Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698801398398550171,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=981b3664-7f83-45ab-9ddf-ba4f58ffe073 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:16:38 old-k8s-version-330042 crio[712]: time="2023-11-01 01:16:38.399302264Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e336f769-c1ef-4266-afe3-0b6c2029aedd name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:16:38 old-k8s-version-330042 crio[712]: time="2023-11-01 01:16:38.399354683Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e336f769-c1ef-4266-afe3-0b6c2029aedd name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:16:38 old-k8s-version-330042 crio[712]: time="2023-11-01 01:16:38.399587170Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47e9d096bab623b41a163040180ac989be703b43ed4158dcada9550cc356baa9,PodSandboxId:2bc0679301c92cfefb4fc946b72ac70b853adec0652e63faad70865a6e3e089a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698800779724593813,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dd1f9a9-5780-44ca-b917-4262b661d705,},Annotations:map[string]string{io.kubernetes.container.hash: d3681a08,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84aa2ac725186095a8531ea178ce728ecdc22eb3a5421d8a7793c380fd0b91db,PodSandboxId:fe82f92e3388f19a12451370d3b51420c9825b83e5d3121a1746fda4129e6e4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1698800778947822613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-v2xlz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36626c20-6011-458b-a4a0-3b20dd0a2d7d,},Annotations:map[string]string{io.kubernetes.container.hash: 9de5e7d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cea2a8f7c286807900a09209205034982d97ab11615435f0759431aa7dbb1cf,PodSandboxId:4fbda9ea40dbabd32abb80de20e1cbcb8132cd9236bc271e994c1073123cf8f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1698800778369009032,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hkl2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea52a
4a6-d4d0-4ffe-892b-57869eddeb19,},Annotations:map[string]string{io.kubernetes.container.hash: 3b669d41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d7a337b3484e33b88c27bb98b7190ca96a1228f4be92caf932b3ad008d9c1a1,PodSandboxId:e95d110a3c8bbfb2defb6c7b519f669f7b828ba07a94fa43130175e79f65246c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1698800752089986016,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e364ddc19ecd628024c426c1c99940aa,},Annotations:map[s
tring]string{io.kubernetes.container.hash: aac46c06,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c72bd307dc95d8a490f8ce186c9e0fd7d636bd82e0b07ae130b68caa14fa8ef,PodSandboxId:d1deaf65d94fa2b0967a9422ded210de010414fe098352937de22790ee3ef39e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1698800750901816502,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aab85e7b72354e61671d1808369ec300,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 464e7b7e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6dfc40f684bae2ed88ef5e956c1cc1b727a6db7dd14095504543757767d170f,PodSandboxId:64bcf463ea572198a70b221d1472e002c43c80cb8ca5a7bb3b833fe920a08491,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1698800750828482948,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubern
etes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2112d10fa84b67324dec92087b40581b328307a5cb69e922e1c3a8a63343920c,PodSandboxId:6970e3a8abc6a6a707074731218947867e4bd7285ab87c10ea35079c3640755d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1698800750747524609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e336f769-c1ef-4266-afe3-0b6c2029aedd name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:16:38 old-k8s-version-330042 crio[712]: time="2023-11-01 01:16:38.443237057Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f6051fc0-f633-46d0-bf26-37fbc41af796 name=/runtime.v1.RuntimeService/Version
	Nov 01 01:16:38 old-k8s-version-330042 crio[712]: time="2023-11-01 01:16:38.443317028Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f6051fc0-f633-46d0-bf26-37fbc41af796 name=/runtime.v1.RuntimeService/Version
	Nov 01 01:16:38 old-k8s-version-330042 crio[712]: time="2023-11-01 01:16:38.445133619Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8cb1c3fe-eb8a-4774-866a-be507762ba8f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:16:38 old-k8s-version-330042 crio[712]: time="2023-11-01 01:16:38.445564765Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698801398445550555,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=8cb1c3fe-eb8a-4774-866a-be507762ba8f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:16:38 old-k8s-version-330042 crio[712]: time="2023-11-01 01:16:38.446221490Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ef8fe5db-4933-41a9-833e-cefe2f9d2977 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:16:38 old-k8s-version-330042 crio[712]: time="2023-11-01 01:16:38.446272574Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ef8fe5db-4933-41a9-833e-cefe2f9d2977 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:16:38 old-k8s-version-330042 crio[712]: time="2023-11-01 01:16:38.446437177Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47e9d096bab623b41a163040180ac989be703b43ed4158dcada9550cc356baa9,PodSandboxId:2bc0679301c92cfefb4fc946b72ac70b853adec0652e63faad70865a6e3e089a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698800779724593813,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dd1f9a9-5780-44ca-b917-4262b661d705,},Annotations:map[string]string{io.kubernetes.container.hash: d3681a08,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84aa2ac725186095a8531ea178ce728ecdc22eb3a5421d8a7793c380fd0b91db,PodSandboxId:fe82f92e3388f19a12451370d3b51420c9825b83e5d3121a1746fda4129e6e4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1698800778947822613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-v2xlz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36626c20-6011-458b-a4a0-3b20dd0a2d7d,},Annotations:map[string]string{io.kubernetes.container.hash: 9de5e7d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cea2a8f7c286807900a09209205034982d97ab11615435f0759431aa7dbb1cf,PodSandboxId:4fbda9ea40dbabd32abb80de20e1cbcb8132cd9236bc271e994c1073123cf8f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1698800778369009032,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hkl2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea52a
4a6-d4d0-4ffe-892b-57869eddeb19,},Annotations:map[string]string{io.kubernetes.container.hash: 3b669d41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d7a337b3484e33b88c27bb98b7190ca96a1228f4be92caf932b3ad008d9c1a1,PodSandboxId:e95d110a3c8bbfb2defb6c7b519f669f7b828ba07a94fa43130175e79f65246c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1698800752089986016,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e364ddc19ecd628024c426c1c99940aa,},Annotations:map[s
tring]string{io.kubernetes.container.hash: aac46c06,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c72bd307dc95d8a490f8ce186c9e0fd7d636bd82e0b07ae130b68caa14fa8ef,PodSandboxId:d1deaf65d94fa2b0967a9422ded210de010414fe098352937de22790ee3ef39e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1698800750901816502,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aab85e7b72354e61671d1808369ec300,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 464e7b7e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6dfc40f684bae2ed88ef5e956c1cc1b727a6db7dd14095504543757767d170f,PodSandboxId:64bcf463ea572198a70b221d1472e002c43c80cb8ca5a7bb3b833fe920a08491,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1698800750828482948,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubern
etes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2112d10fa84b67324dec92087b40581b328307a5cb69e922e1c3a8a63343920c,PodSandboxId:6970e3a8abc6a6a707074731218947867e4bd7285ab87c10ea35079c3640755d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1698800750747524609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ef8fe5db-4933-41a9-833e-cefe2f9d2977 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:16:38 old-k8s-version-330042 crio[712]: time="2023-11-01 01:16:38.480313062Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ee3e8929-891a-45f3-88de-a06b0bb7d81c name=/runtime.v1.RuntimeService/Version
	Nov 01 01:16:38 old-k8s-version-330042 crio[712]: time="2023-11-01 01:16:38.480370637Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ee3e8929-891a-45f3-88de-a06b0bb7d81c name=/runtime.v1.RuntimeService/Version
	Nov 01 01:16:38 old-k8s-version-330042 crio[712]: time="2023-11-01 01:16:38.481736322Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8114c497-636a-4b7d-acea-35a24331002a name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:16:38 old-k8s-version-330042 crio[712]: time="2023-11-01 01:16:38.482202425Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698801398482185739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=8114c497-636a-4b7d-acea-35a24331002a name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:16:38 old-k8s-version-330042 crio[712]: time="2023-11-01 01:16:38.482702426Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c425961c-c5e4-4860-8d7d-2b2849131d9e name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:16:38 old-k8s-version-330042 crio[712]: time="2023-11-01 01:16:38.482750040Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c425961c-c5e4-4860-8d7d-2b2849131d9e name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:16:38 old-k8s-version-330042 crio[712]: time="2023-11-01 01:16:38.482907577Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47e9d096bab623b41a163040180ac989be703b43ed4158dcada9550cc356baa9,PodSandboxId:2bc0679301c92cfefb4fc946b72ac70b853adec0652e63faad70865a6e3e089a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698800779724593813,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dd1f9a9-5780-44ca-b917-4262b661d705,},Annotations:map[string]string{io.kubernetes.container.hash: d3681a08,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84aa2ac725186095a8531ea178ce728ecdc22eb3a5421d8a7793c380fd0b91db,PodSandboxId:fe82f92e3388f19a12451370d3b51420c9825b83e5d3121a1746fda4129e6e4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1698800778947822613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-v2xlz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36626c20-6011-458b-a4a0-3b20dd0a2d7d,},Annotations:map[string]string{io.kubernetes.container.hash: 9de5e7d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cea2a8f7c286807900a09209205034982d97ab11615435f0759431aa7dbb1cf,PodSandboxId:4fbda9ea40dbabd32abb80de20e1cbcb8132cd9236bc271e994c1073123cf8f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1698800778369009032,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hkl2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea52a
4a6-d4d0-4ffe-892b-57869eddeb19,},Annotations:map[string]string{io.kubernetes.container.hash: 3b669d41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d7a337b3484e33b88c27bb98b7190ca96a1228f4be92caf932b3ad008d9c1a1,PodSandboxId:e95d110a3c8bbfb2defb6c7b519f669f7b828ba07a94fa43130175e79f65246c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1698800752089986016,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e364ddc19ecd628024c426c1c99940aa,},Annotations:map[s
tring]string{io.kubernetes.container.hash: aac46c06,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c72bd307dc95d8a490f8ce186c9e0fd7d636bd82e0b07ae130b68caa14fa8ef,PodSandboxId:d1deaf65d94fa2b0967a9422ded210de010414fe098352937de22790ee3ef39e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1698800750901816502,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aab85e7b72354e61671d1808369ec300,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 464e7b7e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6dfc40f684bae2ed88ef5e956c1cc1b727a6db7dd14095504543757767d170f,PodSandboxId:64bcf463ea572198a70b221d1472e002c43c80cb8ca5a7bb3b833fe920a08491,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1698800750828482948,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubern
etes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2112d10fa84b67324dec92087b40581b328307a5cb69e922e1c3a8a63343920c,PodSandboxId:6970e3a8abc6a6a707074731218947867e4bd7285ab87c10ea35079c3640755d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1698800750747524609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c425961c-c5e4-4860-8d7d-2b2849131d9e name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:16:38 old-k8s-version-330042 crio[712]: time="2023-11-01 01:16:38.494631850Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=bc10bc24-892e-4709-b7da-27cf35f7e43e name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Nov 01 01:16:38 old-k8s-version-330042 crio[712]: time="2023-11-01 01:16:38.494841396Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:7e04121e7702a139845d102c9015efa9c12288cc6f10c0b394b5229c0ed7ee29,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5856cc6-m5v28,Uid:df9123d5-270d-4eac-8801-b4ef14c72ce0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698800779355736013,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5856cc6-m5v28,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df9123d5-270d-4eac-8801-b4ef14c72ce0,k8s-app: metrics-server,pod-template-hash: 74d5856cc6,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-01T01:06:19.009450675Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2bc0679301c92cfefb4fc946b72ac70b853adec0652e63faad70865a6e3e089a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1dd1f9a9-5780-44ca-b917-4262b661d7
05,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698800779132007730,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dd1f9a9-5780-44ca-b917-4262b661d705,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\
"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-11-01T01:06:17.88153306Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fe82f92e3388f19a12451370d3b51420c9825b83e5d3121a1746fda4129e6e4c,Metadata:&PodSandboxMetadata{Name:coredns-5644d7b6d9-v2xlz,Uid:36626c20-6011-458b-a4a0-3b20dd0a2d7d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698800778736355007,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5644d7b6d9-v2xlz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36626c20-6011-458b-a4a0-3b20dd0a2d7d,k8s-app: kube-dns,pod-template-hash: 5644d7b6d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-01T01:06:18.389957639Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4fbda9ea40dbabd32abb80de20e1cbcb8132cd9236bc271e994c1073123cf8f9,Metadata:&PodSandboxMetadata{Name:kube-proxy-hkl2m,Uid:ea52a4a6-d4d0-4ffe-892b
-57869eddeb19,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698800776856377475,Labels:map[string]string{controller-revision-hash: 68594d95c,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-hkl2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea52a4a6-d4d0-4ffe-892b-57869eddeb19,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-01T01:06:16.501178617Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:64bcf463ea572198a70b221d1472e002c43c80cb8ca5a7bb3b833fe920a08491,Metadata:&PodSandboxMetadata{Name:kube-scheduler-old-k8s-version-330042,Uid:b3d303074fe0ca1d42a8bd9ed248df09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698800750138206653,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1
d42a8bd9ed248df09,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b3d303074fe0ca1d42a8bd9ed248df09,kubernetes.io/config.seen: 2023-11-01T01:05:49.721679819Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6970e3a8abc6a6a707074731218947867e4bd7285ab87c10ea35079c3640755d,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-old-k8s-version-330042,Uid:7376ddb4f190a0ded9394063437bcb4e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698800750133193370,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7376ddb4f190a0ded9394063437bcb4e,kubernetes.io/config.seen: 2023-11-01T01:05:49.718683781Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id
:e95d110a3c8bbfb2defb6c7b519f669f7b828ba07a94fa43130175e79f65246c,Metadata:&PodSandboxMetadata{Name:etcd-old-k8s-version-330042,Uid:e364ddc19ecd628024c426c1c99940aa,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698800750111824889,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e364ddc19ecd628024c426c1c99940aa,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e364ddc19ecd628024c426c1c99940aa,kubernetes.io/config.seen: 2023-11-01T01:05:49.72332727Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d1deaf65d94fa2b0967a9422ded210de010414fe098352937de22790ee3ef39e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-old-k8s-version-330042,Uid:aab85e7b72354e61671d1808369ec300,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698800750089812181,Labels:map[string]string{component: kube-apiserver,io.k
ubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aab85e7b72354e61671d1808369ec300,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: aab85e7b72354e61671d1808369ec300,kubernetes.io/config.seen: 2023-11-01T01:05:49.717467772Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=bc10bc24-892e-4709-b7da-27cf35f7e43e name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Nov 01 01:16:38 old-k8s-version-330042 crio[712]: time="2023-11-01 01:16:38.495960224Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=013384ea-34b7-4ad5-9590-192e24668232 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Nov 01 01:16:38 old-k8s-version-330042 crio[712]: time="2023-11-01 01:16:38.496017394Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=013384ea-34b7-4ad5-9590-192e24668232 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Nov 01 01:16:38 old-k8s-version-330042 crio[712]: time="2023-11-01 01:16:38.496278296Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47e9d096bab623b41a163040180ac989be703b43ed4158dcada9550cc356baa9,PodSandboxId:2bc0679301c92cfefb4fc946b72ac70b853adec0652e63faad70865a6e3e089a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698800779724593813,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dd1f9a9-5780-44ca-b917-4262b661d705,},Annotations:map[string]string{io.kubernetes.container.hash: d3681a08,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84aa2ac725186095a8531ea178ce728ecdc22eb3a5421d8a7793c380fd0b91db,PodSandboxId:fe82f92e3388f19a12451370d3b51420c9825b83e5d3121a1746fda4129e6e4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1698800778947822613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-v2xlz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36626c20-6011-458b-a4a0-3b20dd0a2d7d,},Annotations:map[string]string{io.kubernetes.container.hash: 9de5e7d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cea2a8f7c286807900a09209205034982d97ab11615435f0759431aa7dbb1cf,PodSandboxId:4fbda9ea40dbabd32abb80de20e1cbcb8132cd9236bc271e994c1073123cf8f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1698800778369009032,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hkl2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea52a
4a6-d4d0-4ffe-892b-57869eddeb19,},Annotations:map[string]string{io.kubernetes.container.hash: 3b669d41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d7a337b3484e33b88c27bb98b7190ca96a1228f4be92caf932b3ad008d9c1a1,PodSandboxId:e95d110a3c8bbfb2defb6c7b519f669f7b828ba07a94fa43130175e79f65246c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1698800752089986016,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e364ddc19ecd628024c426c1c99940aa,},Annotations:map[s
tring]string{io.kubernetes.container.hash: aac46c06,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c72bd307dc95d8a490f8ce186c9e0fd7d636bd82e0b07ae130b68caa14fa8ef,PodSandboxId:d1deaf65d94fa2b0967a9422ded210de010414fe098352937de22790ee3ef39e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1698800750901816502,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aab85e7b72354e61671d1808369ec300,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 464e7b7e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6dfc40f684bae2ed88ef5e956c1cc1b727a6db7dd14095504543757767d170f,PodSandboxId:64bcf463ea572198a70b221d1472e002c43c80cb8ca5a7bb3b833fe920a08491,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1698800750828482948,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubern
etes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2112d10fa84b67324dec92087b40581b328307a5cb69e922e1c3a8a63343920c,PodSandboxId:6970e3a8abc6a6a707074731218947867e4bd7285ab87c10ea35079c3640755d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1698800750747524609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=013384ea-34b7-4ad5-9590-192e24668232 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	47e9d096bab62       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   10 minutes ago      Running             storage-provisioner       0                   2bc0679301c92       storage-provisioner
	84aa2ac725186       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   10 minutes ago      Running             coredns                   0                   fe82f92e3388f       coredns-5644d7b6d9-v2xlz
	7cea2a8f7c286       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   10 minutes ago      Running             kube-proxy                0                   4fbda9ea40dba       kube-proxy-hkl2m
	0d7a337b3484e       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   10 minutes ago      Running             etcd                      0                   e95d110a3c8bb       etcd-old-k8s-version-330042
	9c72bd307dc95       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   10 minutes ago      Running             kube-apiserver            0                   d1deaf65d94fa       kube-apiserver-old-k8s-version-330042
	a6dfc40f684ba       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   10 minutes ago      Running             kube-scheduler            0                   64bcf463ea572       kube-scheduler-old-k8s-version-330042
	2112d10fa84b6       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   10 minutes ago      Running             kube-controller-manager   0                   6970e3a8abc6a       kube-controller-manager-old-k8s-version-330042
	
	* 
	* ==> coredns [84aa2ac725186095a8531ea178ce728ecdc22eb3a5421d8a7793c380fd0b91db] <==
	* .:53
	2023-11-01T01:06:19.212Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	2023-11-01T01:06:19.212Z [INFO] CoreDNS-1.6.2
	2023-11-01T01:06:19.212Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-11-01T01:06:19.225Z [INFO] 127.0.0.1:55625 - 16103 "HINFO IN 3101082495356081793.6221192527272173986. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011553843s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-330042
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-330042
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9
	                    minikube.k8s.io/name=old-k8s-version-330042
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_01T01_06_01_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Nov 2023 01:05:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Nov 2023 01:16:16 +0000   Wed, 01 Nov 2023 01:05:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Nov 2023 01:16:16 +0000   Wed, 01 Nov 2023 01:05:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Nov 2023 01:16:16 +0000   Wed, 01 Nov 2023 01:05:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Nov 2023 01:16:16 +0000   Wed, 01 Nov 2023 01:05:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.90
	  Hostname:    old-k8s-version-330042
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 78eee0e52c544393a797354bd60373a7
	 System UUID:                78eee0e5-2c54-4393-a797-354bd60373a7
	 Boot ID:                    0eca7327-765c-4eae-b17e-bcbd0aff4118
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-v2xlz                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                etcd-old-k8s-version-330042                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m25s
	  kube-system                kube-apiserver-old-k8s-version-330042             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m44s
	  kube-system                kube-controller-manager-old-k8s-version-330042    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                kube-proxy-hkl2m                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                kube-scheduler-old-k8s-version-330042             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                metrics-server-74d5856cc6-m5v28                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         10m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet, old-k8s-version-330042     Node old-k8s-version-330042 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x7 over 10m)  kubelet, old-k8s-version-330042     Node old-k8s-version-330042 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet, old-k8s-version-330042     Node old-k8s-version-330042 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                kube-proxy, old-k8s-version-330042  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Nov 1 01:00] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.064201] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.477165] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.926122] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.141967] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.564915] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.603308] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.119192] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.153404] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.107213] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.235019] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[ +20.625955] systemd-fstab-generator[1030]: Ignoring "noauto" for root device
	[  +0.510958] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov 1 01:01] kauditd_printk_skb: 13 callbacks suppressed
	[ +31.144458] kauditd_printk_skb: 4 callbacks suppressed
	[Nov 1 01:05] systemd-fstab-generator[3090]: Ignoring "noauto" for root device
	[  +0.812286] kauditd_printk_skb: 6 callbacks suppressed
	[Nov 1 01:06] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [0d7a337b3484e33b88c27bb98b7190ca96a1228f4be92caf932b3ad008d9c1a1] <==
	* 2023-11-01 01:05:52.202329 I | raft: 8d381aaacda0b9bd became follower at term 0
	2023-11-01 01:05:52.202357 I | raft: newRaft 8d381aaacda0b9bd [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-11-01 01:05:52.202378 I | raft: 8d381aaacda0b9bd became follower at term 1
	2023-11-01 01:05:52.221732 W | auth: simple token is not cryptographically signed
	2023-11-01 01:05:52.227798 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-11-01 01:05:52.233543 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-01 01:05:52.233802 I | embed: listening for metrics on http://192.168.39.90:2381
	2023-11-01 01:05:52.234130 I | etcdserver: 8d381aaacda0b9bd as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-11-01 01:05:52.234380 I | etcdserver/membership: added member 8d381aaacda0b9bd [https://192.168.39.90:2380] to cluster 8cf3a1558a63fa9e
	2023-11-01 01:05:52.234434 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-11-01 01:05:52.903003 I | raft: 8d381aaacda0b9bd is starting a new election at term 1
	2023-11-01 01:05:52.903118 I | raft: 8d381aaacda0b9bd became candidate at term 2
	2023-11-01 01:05:52.903132 I | raft: 8d381aaacda0b9bd received MsgVoteResp from 8d381aaacda0b9bd at term 2
	2023-11-01 01:05:52.903142 I | raft: 8d381aaacda0b9bd became leader at term 2
	2023-11-01 01:05:52.903147 I | raft: raft.node: 8d381aaacda0b9bd elected leader 8d381aaacda0b9bd at term 2
	2023-11-01 01:05:52.903600 I | etcdserver: setting up the initial cluster version to 3.3
	2023-11-01 01:05:52.905020 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-11-01 01:05:52.905482 I | etcdserver: published {Name:old-k8s-version-330042 ClientURLs:[https://192.168.39.90:2379]} to cluster 8cf3a1558a63fa9e
	2023-11-01 01:05:52.905622 I | embed: ready to serve client requests
	2023-11-01 01:05:52.906884 I | embed: serving client requests on 192.168.39.90:2379
	2023-11-01 01:05:52.906961 I | embed: ready to serve client requests
	2023-11-01 01:05:52.907161 I | etcdserver/api: enabled capabilities for version 3.3
	2023-11-01 01:05:52.908241 I | embed: serving client requests on 127.0.0.1:2379
	2023-11-01 01:15:52.931227 I | mvcc: store.index: compact 661
	2023-11-01 01:15:52.932914 I | mvcc: finished scheduled compaction at 661 (took 1.288205ms)
	
	* 
	* ==> kernel <==
	*  01:16:38 up 16 min,  0 users,  load average: 0.30, 0.15, 0.11
	Linux old-k8s-version-330042 5.10.57 #1 SMP Tue Oct 31 22:14:31 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [9c72bd307dc95d8a490f8ce186c9e0fd7d636bd82e0b07ae130b68caa14fa8ef] <==
	* I1101 01:09:19.725739       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1101 01:09:19.725870       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 01:09:19.725927       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1101 01:09:19.725935       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1101 01:10:57.375184       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1101 01:10:57.375445       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 01:10:57.375548       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1101 01:10:57.375578       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1101 01:11:57.376023       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1101 01:11:57.376409       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 01:11:57.376574       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1101 01:11:57.376617       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1101 01:13:57.377248       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1101 01:13:57.377374       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 01:13:57.377440       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1101 01:13:57.377452       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1101 01:15:57.379578       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1101 01:15:57.379738       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 01:15:57.379813       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1101 01:15:57.379825       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [2112d10fa84b67324dec92087b40581b328307a5cb69e922e1c3a8a63343920c] <==
	* E1101 01:10:18.895523       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1101 01:10:32.876334       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1101 01:10:49.147844       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1101 01:11:04.878426       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1101 01:11:19.399879       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1101 01:11:36.881401       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1101 01:11:49.652096       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1101 01:12:08.884230       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1101 01:12:19.906829       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1101 01:12:40.886220       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1101 01:12:50.159118       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1101 01:13:12.888655       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1101 01:13:20.411249       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1101 01:13:44.891236       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1101 01:13:50.663317       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1101 01:14:16.893366       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1101 01:14:20.915449       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1101 01:14:48.895379       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1101 01:14:51.167562       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1101 01:15:20.897698       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1101 01:15:21.419595       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1101 01:15:51.671556       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1101 01:15:52.900385       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1101 01:16:21.923610       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1101 01:16:24.902233       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [7cea2a8f7c286807900a09209205034982d97ab11615435f0759431aa7dbb1cf] <==
	* W1101 01:06:18.716601       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1101 01:06:18.760499       1 node.go:135] Successfully retrieved node IP: 192.168.39.90
	I1101 01:06:18.760629       1 server_others.go:149] Using iptables Proxier.
	I1101 01:06:18.761597       1 server.go:529] Version: v1.16.0
	I1101 01:06:18.779304       1 config.go:313] Starting service config controller
	I1101 01:06:18.779369       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1101 01:06:18.779412       1 config.go:131] Starting endpoints config controller
	I1101 01:06:18.779443       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1101 01:06:18.879826       1 shared_informer.go:204] Caches are synced for endpoints config 
	I1101 01:06:18.879936       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [a6dfc40f684bae2ed88ef5e956c1cc1b727a6db7dd14095504543757767d170f] <==
	* I1101 01:05:56.378457       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1101 01:05:56.431332       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1101 01:05:56.436161       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1101 01:05:56.436413       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1101 01:05:56.438293       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1101 01:05:56.438390       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1101 01:05:56.438428       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1101 01:05:56.438460       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1101 01:05:56.439850       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1101 01:05:56.439929       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1101 01:05:56.440104       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1101 01:05:56.440818       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1101 01:05:57.434514       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1101 01:05:57.439382       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1101 01:05:57.441131       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1101 01:05:57.442746       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1101 01:05:57.444399       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1101 01:05:57.447201       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1101 01:05:57.448599       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1101 01:05:57.451916       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1101 01:05:57.453892       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1101 01:05:57.455142       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1101 01:05:57.456440       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1101 01:06:16.593765       1 factory.go:585] pod is already present in the activeQ
	E1101 01:06:16.728533       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-11-01 01:00:27 UTC, ends at Wed 2023-11-01 01:16:39 UTC. --
	Nov 01 01:11:55 old-k8s-version-330042 kubelet[3096]: E1101 01:11:55.535630    3096 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 01 01:11:55 old-k8s-version-330042 kubelet[3096]: E1101 01:11:55.535684    3096 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 01 01:11:55 old-k8s-version-330042 kubelet[3096]: E1101 01:11:55.535713    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Nov 01 01:12:10 old-k8s-version-330042 kubelet[3096]: E1101 01:12:10.523551    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:12:25 old-k8s-version-330042 kubelet[3096]: E1101 01:12:25.525788    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:12:37 old-k8s-version-330042 kubelet[3096]: E1101 01:12:37.523888    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:12:49 old-k8s-version-330042 kubelet[3096]: E1101 01:12:49.523800    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:13:04 old-k8s-version-330042 kubelet[3096]: E1101 01:13:04.524148    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:13:19 old-k8s-version-330042 kubelet[3096]: E1101 01:13:19.523643    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:13:31 old-k8s-version-330042 kubelet[3096]: E1101 01:13:31.523383    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:13:43 old-k8s-version-330042 kubelet[3096]: E1101 01:13:43.523384    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:13:54 old-k8s-version-330042 kubelet[3096]: E1101 01:13:54.523476    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:14:07 old-k8s-version-330042 kubelet[3096]: E1101 01:14:07.523828    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:14:21 old-k8s-version-330042 kubelet[3096]: E1101 01:14:21.523971    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:14:36 old-k8s-version-330042 kubelet[3096]: E1101 01:14:36.523471    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:14:48 old-k8s-version-330042 kubelet[3096]: E1101 01:14:48.523747    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:15:01 old-k8s-version-330042 kubelet[3096]: E1101 01:15:01.523437    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:15:13 old-k8s-version-330042 kubelet[3096]: E1101 01:15:13.523834    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:15:24 old-k8s-version-330042 kubelet[3096]: E1101 01:15:24.523318    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:15:39 old-k8s-version-330042 kubelet[3096]: E1101 01:15:39.523742    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:15:49 old-k8s-version-330042 kubelet[3096]: E1101 01:15:49.626190    3096 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Nov 01 01:15:53 old-k8s-version-330042 kubelet[3096]: E1101 01:15:53.523625    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:16:07 old-k8s-version-330042 kubelet[3096]: E1101 01:16:07.523769    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:16:20 old-k8s-version-330042 kubelet[3096]: E1101 01:16:20.523622    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:16:32 old-k8s-version-330042 kubelet[3096]: E1101 01:16:32.523784    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [47e9d096bab623b41a163040180ac989be703b43ed4158dcada9550cc356baa9] <==
	* I1101 01:06:19.831663       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 01:06:19.848248       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 01:06:19.848338       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 01:06:19.856824       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 01:06:19.857621       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a97e94b9-9dff-4bda-b326-60eaa155914e", APIVersion:"v1", ResourceVersion:"422", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-330042_c63c0714-a303-4b29-a0ce-ec388327e4a4 became leader
	I1101 01:06:19.858123       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-330042_c63c0714-a303-4b29-a0ce-ec388327e4a4!
	I1101 01:06:19.959257       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-330042_c63c0714-a303-4b29-a0ce-ec388327e4a4!
	

                                                
                                                
-- /stdout --
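A side note on the storage-provisioner log above: the provisioner takes a leader-election lock on the kube-system/k8s.io-minikube-hostpath Endpoints object before it starts serving, and the current holder is recorded in that object's control-plane.alpha.kubernetes.io/leader annotation. The check below is illustrative only (not part of the captured run) and assumes the old-k8s-version-330042 context is still reachable; the profile is deleted later in the Audit log.

	# Illustrative: show the leader-election record the provisioner log refers to.
	# The holder identity appears in the control-plane.alpha.kubernetes.io/leader annotation.
	kubectl --context old-k8s-version-330042 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml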
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-330042 -n old-k8s-version-330042
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-330042 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-m5v28
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-330042 describe pod metrics-server-74d5856cc6-m5v28
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-330042 describe pod metrics-server-74d5856cc6-m5v28: exit status 1 (71.811703ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-m5v28" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-330042 describe pod metrics-server-74d5856cc6-m5v28: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.08s)
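The repeated ImagePullBackOff entries in the kubelet log above are expected for this test rather than a registry outage: the Audit table further down shows metrics-server being enabled with its registry remapped to the unreachable fake.domain, so the pod can never pull its image. A minimal reconstruction of that remap, using the same flags and profile name as the Audit entry (illustrative; the authoritative invocation is the one recorded in the table):

	# Enable metrics-server while pointing its image at a registry that does not resolve,
	# which deliberately leaves the pod in ImagePullBackOff.
	out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-330042 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain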

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (404.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1101 01:14:51.060045   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/custom-flannel-090856/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-754132 -n embed-certs-754132
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-11-01 01:21:26.710039554 +0000 UTC m=+5860.274621550
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-754132 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-754132 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.695µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-754132 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
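For context, the assertion above expects the dashboard-metrics-scraper deployment to reference registry.k8s.io/echoserver:1.4 because the dashboard addon was enabled with --images=MetricsScraper=registry.k8s.io/echoserver:1.4 (see the Audit table below). The describe call hit the context deadline, but outside the test the same information can be pulled with plain kubectl; an illustrative check, assuming the embed-certs-754132 context is reachable:

	# Illustrative only: list the pods the test waited on, then print the image used by
	# the dashboard-metrics-scraper deployment.
	kubectl --context embed-certs-754132 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context embed-certs-754132 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'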
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-754132 -n embed-certs-754132
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-754132 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-754132 logs -n 25: (1.335425886s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p flannel-090856 sudo find                            | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |                |                     |                     |
	| ssh     | -p flannel-090856 sudo crio                            | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | config                                                 |                              |         |                |                     |                     |
	| delete  | -p flannel-090856                                      | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	| delete  | -p                                                     | disable-driver-mounts-130996 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | disable-driver-mounts-130996                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:53 UTC |
	|         | default-k8s-diff-port-639310                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-008483             | no-preload-008483            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC | 01 Nov 23 00:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-008483                                   | no-preload-008483            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-754132            | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC | 01 Nov 23 00:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-754132                                  | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-330042        | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC | 01 Nov 23 00:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-330042                              | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-639310  | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:53 UTC | 01 Nov 23 00:53 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:53 UTC |                     |
	|         | default-k8s-diff-port-639310                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-008483                  | no-preload-008483            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-754132                 | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-008483                                   | no-preload-008483            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC | 01 Nov 23 01:06 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| start   | -p embed-certs-754132                                  | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC | 01 Nov 23 01:05 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-330042             | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-330042                              | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC | 01 Nov 23 01:07 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-639310       | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:56 UTC | 01 Nov 23 01:06 UTC |
	|         | default-k8s-diff-port-639310                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| delete  | -p old-k8s-version-330042                              | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:20 UTC | 01 Nov 23 01:20 UTC |
	| start   | -p newest-cni-816754 --memory=2200 --alsologtostderr   | newest-cni-816754            | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:20 UTC | 01 Nov 23 01:21 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| delete  | -p no-preload-008483                                   | no-preload-008483            | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:21 UTC | 01 Nov 23 01:21 UTC |
	| addons  | enable metrics-server -p newest-cni-816754             | newest-cni-816754            | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:21 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/01 01:20:26
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 01:20:26.901577   64625 out.go:296] Setting OutFile to fd 1 ...
	I1101 01:20:26.901877   64625 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 01:20:26.901887   64625 out.go:309] Setting ErrFile to fd 2...
	I1101 01:20:26.901895   64625 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 01:20:26.902108   64625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7305/.minikube/bin
	I1101 01:20:26.902738   64625 out.go:303] Setting JSON to false
	I1101 01:20:26.903795   64625 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7372,"bootTime":1698794255,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 01:20:26.903867   64625 start.go:138] virtualization: kvm guest
	I1101 01:20:26.906222   64625 out.go:177] * [newest-cni-816754] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1101 01:20:26.907419   64625 out.go:177]   - MINIKUBE_LOCATION=17486
	I1101 01:20:26.907510   64625 notify.go:220] Checking for updates...
	I1101 01:20:26.908569   64625 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 01:20:26.909780   64625 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 01:20:26.911040   64625 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7305/.minikube
	I1101 01:20:26.912350   64625 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 01:20:26.913709   64625 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 01:20:26.915573   64625 config.go:182] Loaded profile config "default-k8s-diff-port-639310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:20:26.915683   64625 config.go:182] Loaded profile config "embed-certs-754132": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:20:26.915774   64625 config.go:182] Loaded profile config "no-preload-008483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:20:26.915865   64625 driver.go:378] Setting default libvirt URI to qemu:///system
	I1101 01:20:26.957402   64625 out.go:177] * Using the kvm2 driver based on user configuration
	I1101 01:20:26.959160   64625 start.go:298] selected driver: kvm2
	I1101 01:20:26.959182   64625 start.go:902] validating driver "kvm2" against <nil>
	I1101 01:20:26.959194   64625 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 01:20:26.959984   64625 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:20:26.960073   64625 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17486-7305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1101 01:20:26.976448   64625 install.go:137] /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1101 01:20:26.976536   64625 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	W1101 01:20:26.976585   64625 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1101 01:20:26.976861   64625 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 01:20:26.976937   64625 cni.go:84] Creating CNI manager for ""
	I1101 01:20:26.976955   64625 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:20:26.976973   64625 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1101 01:20:26.976985   64625 start_flags.go:323] config:
	{Name:newest-cni-816754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-816754 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 01:20:26.977171   64625 iso.go:125] acquiring lock: {Name:mk1f649ca0b7c1ae293cd66cb85f9eeda028b20b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:20:26.979480   64625 out.go:177] * Starting control plane node newest-cni-816754 in cluster newest-cni-816754
	I1101 01:20:26.981114   64625 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 01:20:26.981177   64625 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1101 01:20:26.981212   64625 cache.go:56] Caching tarball of preloaded images
	I1101 01:20:26.981372   64625 preload.go:174] Found /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 01:20:26.981391   64625 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1101 01:20:26.981513   64625 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/config.json ...
	I1101 01:20:26.981539   64625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/config.json: {Name:mk93f245040cb932920ceaccd9b3116731eb7701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:20:26.981715   64625 start.go:365] acquiring machines lock for newest-cni-816754: {Name:mk7aad88408c319111b9be8e59d9593a9e88374b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 01:20:26.981773   64625 start.go:369] acquired machines lock for "newest-cni-816754" in 41.542µs
	I1101 01:20:26.981798   64625 start.go:93] Provisioning new machine with config: &{Name:newest-cni-816754 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-816754 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 01:20:26.981922   64625 start.go:125] createHost starting for "" (driver="kvm2")
	I1101 01:20:26.984486   64625 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1101 01:20:26.984675   64625 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:20:26.984734   64625 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:20:26.999310   64625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40233
	I1101 01:20:26.999922   64625 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:20:27.000559   64625 main.go:141] libmachine: Using API Version  1
	I1101 01:20:27.000633   64625 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:20:27.001131   64625 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:20:27.001331   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetMachineName
	I1101 01:20:27.001486   64625 main.go:141] libmachine: (newest-cni-816754) Calling .DriverName
	I1101 01:20:27.001663   64625 start.go:159] libmachine.API.Create for "newest-cni-816754" (driver="kvm2")
	I1101 01:20:27.001704   64625 client.go:168] LocalClient.Create starting
	I1101 01:20:27.001749   64625 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem
	I1101 01:20:27.001823   64625 main.go:141] libmachine: Decoding PEM data...
	I1101 01:20:27.001848   64625 main.go:141] libmachine: Parsing certificate...
	I1101 01:20:27.001921   64625 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem
	I1101 01:20:27.001951   64625 main.go:141] libmachine: Decoding PEM data...
	I1101 01:20:27.001968   64625 main.go:141] libmachine: Parsing certificate...
	I1101 01:20:27.001996   64625 main.go:141] libmachine: Running pre-create checks...
	I1101 01:20:27.002010   64625 main.go:141] libmachine: (newest-cni-816754) Calling .PreCreateCheck
	I1101 01:20:27.002505   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetConfigRaw
	I1101 01:20:27.003066   64625 main.go:141] libmachine: Creating machine...
	I1101 01:20:27.003087   64625 main.go:141] libmachine: (newest-cni-816754) Calling .Create
	I1101 01:20:27.003248   64625 main.go:141] libmachine: (newest-cni-816754) Creating KVM machine...
	I1101 01:20:27.005057   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found existing default KVM network
	I1101 01:20:27.006772   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:27.006629   64648 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000112c40}
	I1101 01:20:27.012945   64625 main.go:141] libmachine: (newest-cni-816754) DBG | trying to create private KVM network mk-newest-cni-816754 192.168.39.0/24...
	I1101 01:20:27.098635   64625 main.go:141] libmachine: (newest-cni-816754) DBG | private KVM network mk-newest-cni-816754 192.168.39.0/24 created
	I1101 01:20:27.098677   64625 main.go:141] libmachine: (newest-cni-816754) Setting up store path in /home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754 ...
	I1101 01:20:27.098714   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:27.098609   64648 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17486-7305/.minikube
	I1101 01:20:27.098753   64625 main.go:141] libmachine: (newest-cni-816754) Building disk image from file:///home/jenkins/minikube-integration/17486-7305/.minikube/cache/iso/amd64/minikube-v1.32.0-1698773592-17486-amd64.iso
	I1101 01:20:27.098787   64625 main.go:141] libmachine: (newest-cni-816754) Downloading /home/jenkins/minikube-integration/17486-7305/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17486-7305/.minikube/cache/iso/amd64/minikube-v1.32.0-1698773592-17486-amd64.iso...
	I1101 01:20:27.330302   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:27.330064   64648 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754/id_rsa...
	I1101 01:20:27.606617   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:27.606462   64648 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754/newest-cni-816754.rawdisk...
	I1101 01:20:27.606653   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Writing magic tar header
	I1101 01:20:27.606677   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Writing SSH key tar header
	I1101 01:20:27.606784   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:27.606706   64648 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754 ...
	I1101 01:20:27.606843   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754
	I1101 01:20:27.606862   64625 main.go:141] libmachine: (newest-cni-816754) Setting executable bit set on /home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754 (perms=drwx------)
	I1101 01:20:27.606872   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17486-7305/.minikube/machines
	I1101 01:20:27.606888   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17486-7305/.minikube
	I1101 01:20:27.606899   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17486-7305
	I1101 01:20:27.606926   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1101 01:20:27.606938   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Checking permissions on dir: /home/jenkins
	I1101 01:20:27.606955   64625 main.go:141] libmachine: (newest-cni-816754) Setting executable bit set on /home/jenkins/minikube-integration/17486-7305/.minikube/machines (perms=drwxr-xr-x)
	I1101 01:20:27.606966   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Checking permissions on dir: /home
	I1101 01:20:27.606983   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Skipping /home - not owner
	I1101 01:20:27.606996   64625 main.go:141] libmachine: (newest-cni-816754) Setting executable bit set on /home/jenkins/minikube-integration/17486-7305/.minikube (perms=drwxr-xr-x)
	I1101 01:20:27.607004   64625 main.go:141] libmachine: (newest-cni-816754) Setting executable bit set on /home/jenkins/minikube-integration/17486-7305 (perms=drwxrwxr-x)
	I1101 01:20:27.607017   64625 main.go:141] libmachine: (newest-cni-816754) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1101 01:20:27.607032   64625 main.go:141] libmachine: (newest-cni-816754) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1101 01:20:27.607050   64625 main.go:141] libmachine: (newest-cni-816754) Creating domain...
	I1101 01:20:27.608546   64625 main.go:141] libmachine: (newest-cni-816754) define libvirt domain using xml: 
	I1101 01:20:27.608579   64625 main.go:141] libmachine: (newest-cni-816754) <domain type='kvm'>
	I1101 01:20:27.608590   64625 main.go:141] libmachine: (newest-cni-816754)   <name>newest-cni-816754</name>
	I1101 01:20:27.608596   64625 main.go:141] libmachine: (newest-cni-816754)   <memory unit='MiB'>2200</memory>
	I1101 01:20:27.608602   64625 main.go:141] libmachine: (newest-cni-816754)   <vcpu>2</vcpu>
	I1101 01:20:27.608610   64625 main.go:141] libmachine: (newest-cni-816754)   <features>
	I1101 01:20:27.608619   64625 main.go:141] libmachine: (newest-cni-816754)     <acpi/>
	I1101 01:20:27.608632   64625 main.go:141] libmachine: (newest-cni-816754)     <apic/>
	I1101 01:20:27.608644   64625 main.go:141] libmachine: (newest-cni-816754)     <pae/>
	I1101 01:20:27.608655   64625 main.go:141] libmachine: (newest-cni-816754)     
	I1101 01:20:27.608668   64625 main.go:141] libmachine: (newest-cni-816754)   </features>
	I1101 01:20:27.608678   64625 main.go:141] libmachine: (newest-cni-816754)   <cpu mode='host-passthrough'>
	I1101 01:20:27.608705   64625 main.go:141] libmachine: (newest-cni-816754)   
	I1101 01:20:27.608725   64625 main.go:141] libmachine: (newest-cni-816754)   </cpu>
	I1101 01:20:27.608732   64625 main.go:141] libmachine: (newest-cni-816754)   <os>
	I1101 01:20:27.608751   64625 main.go:141] libmachine: (newest-cni-816754)     <type>hvm</type>
	I1101 01:20:27.608760   64625 main.go:141] libmachine: (newest-cni-816754)     <boot dev='cdrom'/>
	I1101 01:20:27.608766   64625 main.go:141] libmachine: (newest-cni-816754)     <boot dev='hd'/>
	I1101 01:20:27.608775   64625 main.go:141] libmachine: (newest-cni-816754)     <bootmenu enable='no'/>
	I1101 01:20:27.608780   64625 main.go:141] libmachine: (newest-cni-816754)   </os>
	I1101 01:20:27.608786   64625 main.go:141] libmachine: (newest-cni-816754)   <devices>
	I1101 01:20:27.608793   64625 main.go:141] libmachine: (newest-cni-816754)     <disk type='file' device='cdrom'>
	I1101 01:20:27.608805   64625 main.go:141] libmachine: (newest-cni-816754)       <source file='/home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754/boot2docker.iso'/>
	I1101 01:20:27.608817   64625 main.go:141] libmachine: (newest-cni-816754)       <target dev='hdc' bus='scsi'/>
	I1101 01:20:27.608828   64625 main.go:141] libmachine: (newest-cni-816754)       <readonly/>
	I1101 01:20:27.608838   64625 main.go:141] libmachine: (newest-cni-816754)     </disk>
	I1101 01:20:27.608865   64625 main.go:141] libmachine: (newest-cni-816754)     <disk type='file' device='disk'>
	I1101 01:20:27.608886   64625 main.go:141] libmachine: (newest-cni-816754)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1101 01:20:27.608904   64625 main.go:141] libmachine: (newest-cni-816754)       <source file='/home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754/newest-cni-816754.rawdisk'/>
	I1101 01:20:27.608918   64625 main.go:141] libmachine: (newest-cni-816754)       <target dev='hda' bus='virtio'/>
	I1101 01:20:27.608932   64625 main.go:141] libmachine: (newest-cni-816754)     </disk>
	I1101 01:20:27.608945   64625 main.go:141] libmachine: (newest-cni-816754)     <interface type='network'>
	I1101 01:20:27.608968   64625 main.go:141] libmachine: (newest-cni-816754)       <source network='mk-newest-cni-816754'/>
	I1101 01:20:27.608981   64625 main.go:141] libmachine: (newest-cni-816754)       <model type='virtio'/>
	I1101 01:20:27.608997   64625 main.go:141] libmachine: (newest-cni-816754)     </interface>
	I1101 01:20:27.609012   64625 main.go:141] libmachine: (newest-cni-816754)     <interface type='network'>
	I1101 01:20:27.609025   64625 main.go:141] libmachine: (newest-cni-816754)       <source network='default'/>
	I1101 01:20:27.609039   64625 main.go:141] libmachine: (newest-cni-816754)       <model type='virtio'/>
	I1101 01:20:27.609051   64625 main.go:141] libmachine: (newest-cni-816754)     </interface>
	I1101 01:20:27.609064   64625 main.go:141] libmachine: (newest-cni-816754)     <serial type='pty'>
	I1101 01:20:27.609073   64625 main.go:141] libmachine: (newest-cni-816754)       <target port='0'/>
	I1101 01:20:27.609081   64625 main.go:141] libmachine: (newest-cni-816754)     </serial>
	I1101 01:20:27.609098   64625 main.go:141] libmachine: (newest-cni-816754)     <console type='pty'>
	I1101 01:20:27.609113   64625 main.go:141] libmachine: (newest-cni-816754)       <target type='serial' port='0'/>
	I1101 01:20:27.609128   64625 main.go:141] libmachine: (newest-cni-816754)     </console>
	I1101 01:20:27.609142   64625 main.go:141] libmachine: (newest-cni-816754)     <rng model='virtio'>
	I1101 01:20:27.609153   64625 main.go:141] libmachine: (newest-cni-816754)       <backend model='random'>/dev/random</backend>
	I1101 01:20:27.609165   64625 main.go:141] libmachine: (newest-cni-816754)     </rng>
	I1101 01:20:27.609180   64625 main.go:141] libmachine: (newest-cni-816754)     
	I1101 01:20:27.609190   64625 main.go:141] libmachine: (newest-cni-816754)     
	I1101 01:20:27.609202   64625 main.go:141] libmachine: (newest-cni-816754)   </devices>
	I1101 01:20:27.609216   64625 main.go:141] libmachine: (newest-cni-816754) </domain>
	I1101 01:20:27.609227   64625 main.go:141] libmachine: (newest-cni-816754) 
	I1101 01:20:27.613657   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:b4:0b:bf in network default
	I1101 01:20:27.614304   64625 main.go:141] libmachine: (newest-cni-816754) Ensuring networks are active...
	I1101 01:20:27.614325   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:27.615098   64625 main.go:141] libmachine: (newest-cni-816754) Ensuring network default is active
	I1101 01:20:27.615425   64625 main.go:141] libmachine: (newest-cni-816754) Ensuring network mk-newest-cni-816754 is active
	I1101 01:20:27.615881   64625 main.go:141] libmachine: (newest-cni-816754) Getting domain xml...
	I1101 01:20:27.616713   64625 main.go:141] libmachine: (newest-cni-816754) Creating domain...
	I1101 01:20:28.945794   64625 main.go:141] libmachine: (newest-cni-816754) Waiting to get IP...
	I1101 01:20:28.946695   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:28.947110   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:28.947197   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:28.947105   64648 retry.go:31] will retry after 218.225741ms: waiting for machine to come up
	I1101 01:20:29.166699   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:29.167318   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:29.167352   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:29.167256   64648 retry.go:31] will retry after 390.036378ms: waiting for machine to come up
	I1101 01:20:29.558855   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:29.559354   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:29.559389   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:29.559285   64648 retry.go:31] will retry after 410.30945ms: waiting for machine to come up
	I1101 01:20:29.970656   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:29.971063   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:29.971101   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:29.971014   64648 retry.go:31] will retry after 545.455542ms: waiting for machine to come up
	I1101 01:20:30.517668   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:30.518337   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:30.518379   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:30.518285   64648 retry.go:31] will retry after 562.086808ms: waiting for machine to come up
	I1101 01:20:31.081578   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:31.082157   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:31.082205   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:31.082083   64648 retry.go:31] will retry after 744.834019ms: waiting for machine to come up
	I1101 01:20:31.829035   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:31.829593   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:31.829623   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:31.829545   64648 retry.go:31] will retry after 1.124156549s: waiting for machine to come up
	I1101 01:20:32.955229   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:32.955754   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:32.955776   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:32.955707   64648 retry.go:31] will retry after 945.262883ms: waiting for machine to come up
	I1101 01:20:33.903162   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:33.903604   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:33.903627   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:33.903574   64648 retry.go:31] will retry after 1.342633534s: waiting for machine to come up
	I1101 01:20:35.247780   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:35.248333   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:35.248370   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:35.248271   64648 retry.go:31] will retry after 1.717433966s: waiting for machine to come up
	I1101 01:20:36.967748   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:36.968301   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:36.968331   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:36.968243   64648 retry.go:31] will retry after 2.125257088s: waiting for machine to come up
	I1101 01:20:39.096241   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:39.096903   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:39.096930   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:39.096845   64648 retry.go:31] will retry after 3.120284679s: waiting for machine to come up
	I1101 01:20:42.218526   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:42.219010   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:42.219035   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:42.218966   64648 retry.go:31] will retry after 3.400004837s: waiting for machine to come up
	I1101 01:20:45.621833   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:45.622314   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:45.622342   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:45.622255   64648 retry.go:31] will retry after 4.340884931s: waiting for machine to come up
	I1101 01:20:49.966397   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:49.966885   64625 main.go:141] libmachine: (newest-cni-816754) Found IP for machine: 192.168.39.148
	I1101 01:20:49.966944   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has current primary IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:49.966966   64625 main.go:141] libmachine: (newest-cni-816754) Reserving static IP address...
	I1101 01:20:49.967354   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find host DHCP lease matching {name: "newest-cni-816754", mac: "52:54:00:e9:10:53", ip: "192.168.39.148"} in network mk-newest-cni-816754
	I1101 01:20:50.049507   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Getting to WaitForSSH function...
	I1101 01:20:50.049555   64625 main.go:141] libmachine: (newest-cni-816754) Reserved static IP address: 192.168.39.148
	I1101 01:20:50.049575   64625 main.go:141] libmachine: (newest-cni-816754) Waiting for SSH to be available...
	I1101 01:20:50.052593   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.053018   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:50.053066   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.053156   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Using SSH client type: external
	I1101 01:20:50.053178   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754/id_rsa (-rw-------)
	I1101 01:20:50.053215   64625 main.go:141] libmachine: (newest-cni-816754) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.148 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 01:20:50.053237   64625 main.go:141] libmachine: (newest-cni-816754) DBG | About to run SSH command:
	I1101 01:20:50.053250   64625 main.go:141] libmachine: (newest-cni-816754) DBG | exit 0
	I1101 01:20:50.143882   64625 main.go:141] libmachine: (newest-cni-816754) DBG | SSH cmd err, output: <nil>: 
	I1101 01:20:50.144170   64625 main.go:141] libmachine: (newest-cni-816754) KVM machine creation complete!
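The WaitForSSH step above shells out to the system `ssh` binary with a fixed non-interactive option set and repeatedly runs `exit 0` until the guest accepts the connection. Below is a minimal Go sketch of that kind of probe; the helper name, host, user, and key path are illustrative placeholders, not minikube's actual implementation.

```go
package main

import (
	"fmt"
	"os/exec"
)

// buildSSHArgs assembles an argument list similar to the one logged above.
// Host, user, and key path are placeholders, not values captured from this run.
func buildSSHArgs(user, host, keyPath string) []string {
	return []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "ControlMaster=no",
		"-o", "ControlPath=none",
		"-o", "LogLevel=quiet",
		"-o", "PasswordAuthentication=no",
		"-o", "ServerAliveInterval=60",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, host),
		"exit", "0", // the probe command: succeeds once sshd accepts the login
	}
}

func main() {
	args := buildSSHArgs("docker", "192.168.39.148", "/path/to/id_rsa")
	if err := exec.Command("ssh", args...).Run(); err != nil {
		fmt.Println("SSH not ready yet:", err)
		return
	}
	fmt.Println("SSH is available")
}
```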
	I1101 01:20:50.144668   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetConfigRaw
	I1101 01:20:50.145249   64625 main.go:141] libmachine: (newest-cni-816754) Calling .DriverName
	I1101 01:20:50.145481   64625 main.go:141] libmachine: (newest-cni-816754) Calling .DriverName
	I1101 01:20:50.145669   64625 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1101 01:20:50.145685   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetState
	I1101 01:20:50.147056   64625 main.go:141] libmachine: Detecting operating system of created instance...
	I1101 01:20:50.147070   64625 main.go:141] libmachine: Waiting for SSH to be available...
	I1101 01:20:50.147077   64625 main.go:141] libmachine: Getting to WaitForSSH function...
	I1101 01:20:50.147083   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:20:50.149699   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.150244   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:50.150269   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.150408   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHPort
	I1101 01:20:50.150591   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:50.150729   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:50.150859   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHUsername
	I1101 01:20:50.151028   64625 main.go:141] libmachine: Using SSH client type: native
	I1101 01:20:50.151445   64625 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I1101 01:20:50.151466   64625 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1101 01:20:50.267248   64625 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 01:20:50.267271   64625 main.go:141] libmachine: Detecting the provisioner...
	I1101 01:20:50.267280   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:20:50.270067   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.270474   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:50.270509   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.270587   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHPort
	I1101 01:20:50.270746   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:50.270937   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:50.271089   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHUsername
	I1101 01:20:50.271265   64625 main.go:141] libmachine: Using SSH client type: native
	I1101 01:20:50.271607   64625 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I1101 01:20:50.271624   64625 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1101 01:20:50.388826   64625 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g0cee705-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1101 01:20:50.388908   64625 main.go:141] libmachine: found compatible host: buildroot
	I1101 01:20:50.388923   64625 main.go:141] libmachine: Provisioning with buildroot...
	I1101 01:20:50.388932   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetMachineName
	I1101 01:20:50.389214   64625 buildroot.go:166] provisioning hostname "newest-cni-816754"
	I1101 01:20:50.389241   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetMachineName
	I1101 01:20:50.389409   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:20:50.392105   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.392490   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:50.392522   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.392627   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHPort
	I1101 01:20:50.392797   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:50.392983   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:50.393154   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHUsername
	I1101 01:20:50.393340   64625 main.go:141] libmachine: Using SSH client type: native
	I1101 01:20:50.393753   64625 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I1101 01:20:50.393771   64625 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-816754 && echo "newest-cni-816754" | sudo tee /etc/hostname
	I1101 01:20:50.524496   64625 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-816754
	
	I1101 01:20:50.524530   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:20:50.527592   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.528072   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:50.528121   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.528367   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHPort
	I1101 01:20:50.528602   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:50.528794   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:50.529017   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHUsername
	I1101 01:20:50.529224   64625 main.go:141] libmachine: Using SSH client type: native
	I1101 01:20:50.529620   64625 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I1101 01:20:50.529646   64625 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-816754' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-816754/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-816754' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 01:20:50.657526   64625 main.go:141] libmachine: SSH cmd err, output: <nil>: 
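The shell snippet above ensures /etc/hosts maps 127.0.1.1 to the new hostname, rewriting an existing 127.0.1.1 line or appending one; the same pattern reappears later for host.minikube.internal and control-plane.minikube.internal. A rough Go equivalent, purely illustrative (minikube performs this via the shell command shown, not Go):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the shell above: if no line already ends with the
// hostname, rewrite an existing "127.0.1.1 ..." line or append a new one.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		if strings.HasSuffix(strings.TrimSpace(l), hostname) {
			return nil // entry already present
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
}

func main() {
	// Requires root when pointed at the real /etc/hosts.
	if err := ensureHostsEntry("/etc/hosts", "newest-cni-816754"); err != nil {
		fmt.Println("error:", err)
	}
}
```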
	I1101 01:20:50.657563   64625 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1101 01:20:50.657588   64625 buildroot.go:174] setting up certificates
	I1101 01:20:50.657599   64625 provision.go:83] configureAuth start
	I1101 01:20:50.657618   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetMachineName
	I1101 01:20:50.657946   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetIP
	I1101 01:20:50.660675   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.660941   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:50.660970   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.661118   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:20:50.663458   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.663801   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:50.663833   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.664020   64625 provision.go:138] copyHostCerts
	I1101 01:20:50.664082   64625 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem, removing ...
	I1101 01:20:50.664104   64625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 01:20:50.664183   64625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1101 01:20:50.664312   64625 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem, removing ...
	I1101 01:20:50.664323   64625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 01:20:50.664359   64625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1101 01:20:50.664466   64625 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem, removing ...
	I1101 01:20:50.664480   64625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 01:20:50.664525   64625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1101 01:20:50.664577   64625 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.newest-cni-816754 san=[192.168.39.148 192.168.39.148 localhost 127.0.0.1 minikube newest-cni-816754]
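The server certificate above is signed by the local minikube CA with SANs covering the VM IP, localhost, and the node name. As a hedged illustration of that SAN shape only, the sketch below creates a self-signed stand-in with crypto/x509; the real flow signs against ca.pem/ca-key.pem and writes server.pem/server-key.pem under .minikube/machines.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed stand-in: same SAN layout as the log (IPs plus DNS names),
	// but signed by its own key rather than the minikube CA.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-816754"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.39.148"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-816754"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	fmt.Fprintln(os.Stderr, "wrote a self-signed server.pem stand-in to stdout")
}
```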
	I1101 01:20:51.005619   64625 provision.go:172] copyRemoteCerts
	I1101 01:20:51.005678   64625 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 01:20:51.005708   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:20:51.008884   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.009323   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:51.009359   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.009521   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHPort
	I1101 01:20:51.009749   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:51.009919   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHUsername
	I1101 01:20:51.010066   64625 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754/id_rsa Username:docker}
	I1101 01:20:51.101132   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 01:20:51.125444   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 01:20:51.148653   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1101 01:20:51.172340   64625 provision.go:86] duration metric: configureAuth took 514.718541ms
	I1101 01:20:51.172363   64625 buildroot.go:189] setting minikube options for container-runtime
	I1101 01:20:51.172712   64625 config.go:182] Loaded profile config "newest-cni-816754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:20:51.172819   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:20:51.176208   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.176684   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:51.176721   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.176935   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHPort
	I1101 01:20:51.177176   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:51.177359   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:51.177516   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHUsername
	I1101 01:20:51.177725   64625 main.go:141] libmachine: Using SSH client type: native
	I1101 01:20:51.178105   64625 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I1101 01:20:51.178127   64625 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 01:20:51.483523   64625 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 01:20:51.483557   64625 main.go:141] libmachine: Checking connection to Docker...
	I1101 01:20:51.483589   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetURL
	I1101 01:20:51.484996   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Using libvirt version 6000000
	I1101 01:20:51.487079   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.487445   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:51.487479   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.487667   64625 main.go:141] libmachine: Docker is up and running!
	I1101 01:20:51.487682   64625 main.go:141] libmachine: Reticulating splines...
	I1101 01:20:51.487690   64625 client.go:171] LocalClient.Create took 24.485974012s
	I1101 01:20:51.487718   64625 start.go:167] duration metric: libmachine.API.Create for "newest-cni-816754" took 24.486056341s
	I1101 01:20:51.487735   64625 start.go:300] post-start starting for "newest-cni-816754" (driver="kvm2")
	I1101 01:20:51.487751   64625 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 01:20:51.487775   64625 main.go:141] libmachine: (newest-cni-816754) Calling .DriverName
	I1101 01:20:51.488081   64625 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 01:20:51.488105   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:20:51.490270   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.490622   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:51.490644   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.490743   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHPort
	I1101 01:20:51.490946   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:51.491112   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHUsername
	I1101 01:20:51.491250   64625 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754/id_rsa Username:docker}
	I1101 01:20:51.577564   64625 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 01:20:51.582421   64625 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 01:20:51.582451   64625 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1101 01:20:51.582522   64625 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1101 01:20:51.582624   64625 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> 145042.pem in /etc/ssl/certs
	I1101 01:20:51.582715   64625 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 01:20:51.591399   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:20:51.614218   64625 start.go:303] post-start completed in 126.467274ms
	I1101 01:20:51.614257   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetConfigRaw
	I1101 01:20:51.614794   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetIP
	I1101 01:20:51.617335   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.617843   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:51.617874   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.618197   64625 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/config.json ...
	I1101 01:20:51.618407   64625 start.go:128] duration metric: createHost completed in 24.636474986s
	I1101 01:20:51.618431   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:20:51.621008   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.621378   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:51.621410   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.621543   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHPort
	I1101 01:20:51.621766   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:51.621964   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:51.622142   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHUsername
	I1101 01:20:51.622347   64625 main.go:141] libmachine: Using SSH client type: native
	I1101 01:20:51.622677   64625 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I1101 01:20:51.622691   64625 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1101 01:20:51.740752   64625 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698801651.712516909
	
	I1101 01:20:51.740784   64625 fix.go:206] guest clock: 1698801651.712516909
	I1101 01:20:51.740793   64625 fix.go:219] Guest: 2023-11-01 01:20:51.712516909 +0000 UTC Remote: 2023-11-01 01:20:51.618418585 +0000 UTC m=+24.769563112 (delta=94.098324ms)
	I1101 01:20:51.740821   64625 fix.go:190] guest clock delta is within tolerance: 94.098324ms
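The clock check above parses the guest's `date +%s.%N` output and compares it with the host time, accepting the ~94ms delta as within tolerance. A small illustrative comparison in Go; the 2s threshold is an assumed value for the example, not taken from minikube.

```go
package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest/host clock skew is acceptable.
// The threshold is a parameter here; 2s below is an assumed example value.
func withinTolerance(guest, host time.Time, max time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= max
}

func main() {
	// Values mirror the shape of the log: guest clock parsed from
	// `date +%s.%N`, host clock taken at roughly the same moment.
	guest := time.Unix(1698801651, 712516909)
	host := guest.Add(94 * time.Millisecond) // ~the delta reported above
	fmt.Println("delta ok:", withinTolerance(guest, host, 2*time.Second))
}
```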
	I1101 01:20:51.740830   64625 start.go:83] releasing machines lock for "newest-cni-816754", held for 24.759043949s
	I1101 01:20:51.740859   64625 main.go:141] libmachine: (newest-cni-816754) Calling .DriverName
	I1101 01:20:51.741149   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetIP
	I1101 01:20:51.743857   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.744261   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:51.744295   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.744431   64625 main.go:141] libmachine: (newest-cni-816754) Calling .DriverName
	I1101 01:20:51.744972   64625 main.go:141] libmachine: (newest-cni-816754) Calling .DriverName
	I1101 01:20:51.745171   64625 main.go:141] libmachine: (newest-cni-816754) Calling .DriverName
	I1101 01:20:51.745262   64625 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 01:20:51.745342   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:20:51.745424   64625 ssh_runner.go:195] Run: cat /version.json
	I1101 01:20:51.745451   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:20:51.748196   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.748256   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.748573   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:51.748610   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.748646   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:51.748669   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.748736   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHPort
	I1101 01:20:51.748834   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHPort
	I1101 01:20:51.748922   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:51.748981   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:51.749071   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHUsername
	I1101 01:20:51.749139   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHUsername
	I1101 01:20:51.749206   64625 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754/id_rsa Username:docker}
	I1101 01:20:51.749252   64625 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754/id_rsa Username:docker}
	I1101 01:20:51.867866   64625 ssh_runner.go:195] Run: systemctl --version
	I1101 01:20:51.873719   64625 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 01:20:52.035278   64625 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 01:20:52.041189   64625 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 01:20:52.041272   64625 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 01:20:52.056896   64625 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 01:20:52.056923   64625 start.go:472] detecting cgroup driver to use...
	I1101 01:20:52.056988   64625 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 01:20:52.070654   64625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 01:20:52.083147   64625 docker.go:204] disabling cri-docker service (if available) ...
	I1101 01:20:52.083220   64625 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 01:20:52.095798   64625 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 01:20:52.108444   64625 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 01:20:52.224741   64625 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 01:20:52.347981   64625 docker.go:220] disabling docker service ...
	I1101 01:20:52.348055   64625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 01:20:52.361496   64625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 01:20:52.373422   64625 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 01:20:52.491564   64625 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 01:20:52.602648   64625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
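Before configuring CRI-O, the log shows cri-docker and docker being stopped, disabled, and masked so they cannot claim the container runtime. An illustrative Go loop over that systemctl sequence (requires root and systemd; tolerating per-step failures mirrors how the log simply proceeds when a unit is absent):

```go
package main

import (
	"fmt"
	"os/exec"
)

// disableUnit runs the stop/disable/mask sequence seen in the log for the
// cri-docker and docker units. Purely illustrative; needs root and systemd.
func disableUnit(unit string) {
	for _, args := range [][]string{
		{"systemctl", "stop", "-f", unit},
		{"systemctl", "disable", unit},
		{"systemctl", "mask", unit},
	} {
		if err := exec.Command("sudo", args...).Run(); err != nil {
			// Individual steps may fail if the unit does not exist;
			// continue with the rest, as the provisioning log does.
			fmt.Printf("%v: %v\n", args, err)
		}
	}
}

func main() {
	disableUnit("docker.socket")
	disableUnit("docker.service")
}
```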
	I1101 01:20:52.614781   64625 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 01:20:52.632225   64625 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 01:20:52.632289   64625 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:20:52.642604   64625 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 01:20:52.642661   64625 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:20:52.652069   64625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:20:52.661998   64625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
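The three `sed -i` commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses registry.k8s.io/pause:3.9 as the pause image and cgroupfs as the cgroup manager, with conmon placed in the pod cgroup. A hedged Go sketch of the same key rewriting, run locally instead of over SSH; the helper is illustrative, not minikube code.

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setTOMLKey replaces (or appends) a `key = "value"` line, roughly what the
// sed commands above do. Unlike the sed `/a` trick for conmon_cgroup, a
// missing key is simply appended at the end of the file here.
func setTOMLKey(data []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	line := fmt.Sprintf("%s = %q", key, value)
	if re.Match(data) {
		return re.ReplaceAll(data, []byte(line))
	}
	return append(data, []byte("\n"+line+"\n")...)
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log
	data, err := os.ReadFile(conf)
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	data = setTOMLKey(data, "pause_image", "registry.k8s.io/pause:3.9")
	data = setTOMLKey(data, "cgroup_manager", "cgroupfs")
	data = setTOMLKey(data, "conmon_cgroup", "pod")
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		fmt.Println("write:", err)
	}
}
```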
	I1101 01:20:52.671552   64625 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 01:20:52.682867   64625 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 01:20:52.691926   64625 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 01:20:52.692008   64625 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 01:20:52.704079   64625 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 01:20:52.713491   64625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 01:20:52.839278   64625 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 01:20:53.013907   64625 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 01:20:53.013976   64625 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 01:20:53.019384   64625 start.go:540] Will wait 60s for crictl version
	I1101 01:20:53.019445   64625 ssh_runner.go:195] Run: which crictl
	I1101 01:20:53.023197   64625 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 01:20:53.061965   64625 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1101 01:20:53.062085   64625 ssh_runner.go:195] Run: crio --version
	I1101 01:20:53.108002   64625 ssh_runner.go:195] Run: crio --version
	I1101 01:20:53.158652   64625 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1101 01:20:53.160356   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetIP
	I1101 01:20:53.163312   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:53.163744   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:53.163790   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:53.164054   64625 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1101 01:20:53.168313   64625 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:20:53.181782   64625 localpath.go:92] copying /home/jenkins/minikube-integration/17486-7305/.minikube/client.crt -> /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/client.crt
	I1101 01:20:53.181942   64625 localpath.go:117] copying /home/jenkins/minikube-integration/17486-7305/.minikube/client.key -> /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/client.key
	I1101 01:20:53.184256   64625 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1101 01:20:53.185935   64625 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 01:20:53.186019   64625 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:20:53.221394   64625 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1101 01:20:53.221464   64625 ssh_runner.go:195] Run: which lz4
	I1101 01:20:53.225430   64625 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1101 01:20:53.229572   64625 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 01:20:53.229619   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1101 01:20:55.078513   64625 crio.go:444] Took 1.853113 seconds to copy over tarball
	I1101 01:20:55.078590   64625 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 01:20:58.039163   64625 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.960551837s)
	I1101 01:20:58.039190   64625 crio.go:451] Took 2.960647 seconds to extract the tarball
	I1101 01:20:58.039201   64625 ssh_runner.go:146] rm: /preloaded.tar.lz4
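The preload step copies the cached preloaded-images tarball to the guest as /preloaded.tar.lz4, extracts it under /var with `tar -I lz4`, and deletes the tarball. A minimal Go version of the extract-and-clean-up half (assumes the lz4 binary is installed and root access for /var; paths are the ones from the log):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload mirrors the log's "tar -I lz4 -C /var -xf" step followed by
// removing the tarball once the images are unpacked.
func extractPreload(tarball, dest string) error {
	cmd := exec.Command("tar", "-I", "lz4", "-C", dest, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("extract: %w", err)
	}
	return os.Remove(tarball)
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println("error:", err)
	}
}
```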
	I1101 01:20:58.082032   64625 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:20:58.154466   64625 crio.go:496] all images are preloaded for cri-o runtime.
	I1101 01:20:58.154490   64625 cache_images.go:84] Images are preloaded, skipping loading
	I1101 01:20:58.154555   64625 ssh_runner.go:195] Run: crio config
	I1101 01:20:58.229389   64625 cni.go:84] Creating CNI manager for ""
	I1101 01:20:58.229423   64625 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:20:58.229447   64625 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I1101 01:20:58.229485   64625 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.148 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-816754 NodeName:newest-cni-816754 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.148"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.39.148 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 01:20:58.229684   64625 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.148
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-816754"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.148
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.148"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 01:20:58.229811   64625 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-816754 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.148
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:newest-cni-816754 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
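Everything from the kubeadm options at kubeadm.go:176 down to the kubelet drop-in above is rendered from a handful of per-profile values (node IP, node name, pod and service CIDRs, Kubernetes version). A much-reduced Go sketch of that kind of templating follows; the struct and template are illustrative, cover only the fields visible in the log, and are not minikube's actual generator.

```go
package main

import (
	"os"
	"text/template"
)

// Options carries the handful of fields visible in the log; the real
// generator has many more knobs (admission plugins, feature gates, etc.).
type Options struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	opts := Options{
		AdvertiseAddress:  "192.168.39.148",
		BindPort:          8443,
		NodeName:          "newest-cni-816754",
		PodSubnet:         "10.42.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.28.3",
	}
	// Render the minimal config to stdout; the log shows minikube writing its
	// full version to /var/tmp/minikube/kubeadm.yaml.new before copying it into place.
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
```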
	I1101 01:20:58.229888   64625 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 01:20:58.242338   64625 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 01:20:58.242413   64625 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 01:20:58.252625   64625 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (414 bytes)
	I1101 01:20:58.271959   64625 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 01:20:58.290584   64625 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I1101 01:20:58.309005   64625 ssh_runner.go:195] Run: grep 192.168.39.148	control-plane.minikube.internal$ /etc/hosts
	I1101 01:20:58.313280   64625 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.148	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:20:58.325979   64625 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754 for IP: 192.168.39.148
	I1101 01:20:58.326024   64625 certs.go:190] acquiring lock for shared ca certs: {Name:mk6185b3938c4b51e80f57b7c81926c81b632b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:20:58.326204   64625 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key
	I1101 01:20:58.326246   64625 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key
	I1101 01:20:58.326329   64625 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/client.key
	I1101 01:20:58.326352   64625 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.key.b8daa033
	I1101 01:20:58.326362   64625 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.crt.b8daa033 with IP's: [192.168.39.148 10.96.0.1 127.0.0.1 10.0.0.1]
	I1101 01:20:58.427110   64625 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.crt.b8daa033 ...
	I1101 01:20:58.427140   64625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.crt.b8daa033: {Name:mk3f8c141290c3a65392487e79efcc8078b29342 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:20:58.427342   64625 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.key.b8daa033 ...
	I1101 01:20:58.427358   64625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.key.b8daa033: {Name:mk51785e712809e4c053079f222fcaf26d1cb6b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:20:58.427483   64625 certs.go:337] copying /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.crt.b8daa033 -> /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.crt
	I1101 01:20:58.427575   64625 certs.go:341] copying /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.key.b8daa033 -> /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.key
	I1101 01:20:58.427646   64625 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/proxy-client.key
	I1101 01:20:58.427668   64625 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/proxy-client.crt with IP's: []
	I1101 01:20:58.706887   64625 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/proxy-client.crt ...
	I1101 01:20:58.706917   64625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/proxy-client.crt: {Name:mkba323c47c990603b5078f2d8326413583ed649 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:20:58.707094   64625 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/proxy-client.key ...
	I1101 01:20:58.707115   64625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/proxy-client.key: {Name:mk640a807fa21c954ca16b3fd0849059bca2a284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:20:58.707364   64625 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem (1338 bytes)
	W1101 01:20:58.707404   64625 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504_empty.pem, impossibly tiny 0 bytes
	I1101 01:20:58.707415   64625 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 01:20:58.707435   64625 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem (1078 bytes)
	I1101 01:20:58.707464   64625 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem (1123 bytes)
	I1101 01:20:58.707485   64625 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem (1675 bytes)
	I1101 01:20:58.707533   64625 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:20:58.708143   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 01:20:58.734111   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 01:20:58.762510   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 01:20:58.788616   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 01:20:58.813676   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 01:20:58.839628   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 01:20:58.865730   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 01:20:58.892226   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 01:20:58.917779   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem --> /usr/share/ca-certificates/14504.pem (1338 bytes)
	I1101 01:20:58.942571   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /usr/share/ca-certificates/145042.pem (1708 bytes)
	I1101 01:20:58.965958   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 01:20:58.990159   64625 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 01:20:59.008835   64625 ssh_runner.go:195] Run: openssl version
	I1101 01:20:59.015199   64625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14504.pem && ln -fs /usr/share/ca-certificates/14504.pem /etc/ssl/certs/14504.pem"
	I1101 01:20:59.025730   64625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14504.pem
	I1101 01:20:59.030514   64625 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 01:20:59.030585   64625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14504.pem
	I1101 01:20:59.036535   64625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14504.pem /etc/ssl/certs/51391683.0"
	I1101 01:20:59.047853   64625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145042.pem && ln -fs /usr/share/ca-certificates/145042.pem /etc/ssl/certs/145042.pem"
	I1101 01:20:59.058620   64625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145042.pem
	I1101 01:20:59.063296   64625 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 01:20:59.063369   64625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145042.pem
	I1101 01:20:59.069054   64625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145042.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 01:20:59.078653   64625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 01:20:59.089081   64625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:20:59.094030   64625 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:20:59.094097   64625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:20:59.099890   64625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
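Each CA file installed under /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem) so TLS clients on the guest trust it. An illustrative Go version of the hash-and-symlink step, shelling out to openssl the same way the log does (needs root for /etc/ssl/certs; helper name is made up):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCertByHash reproduces the pattern above: compute the OpenSSL subject
// hash of a CA file and symlink it as <hash>.0 under /etc/ssl/certs.
func linkCertByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hash: %w", err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // mimic `ln -fs`: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("error:", err)
	}
}
```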
	I1101 01:20:59.110687   64625 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 01:20:59.115284   64625 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1101 01:20:59.115337   64625 kubeadm.go:404] StartCluster: {Name:newest-cni-816754 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.3 ClusterName:newest-cni-816754 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.148 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/mini
kube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 01:20:59.115439   64625 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 01:20:59.115505   64625 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:20:59.159819   64625 cri.go:89] found id: ""
	I1101 01:20:59.159974   64625 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 01:20:59.169450   64625 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:20:59.178882   64625 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:20:59.188155   64625 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:20:59.188209   64625 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1101 01:20:59.590963   64625 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 01:21:11.988833   64625 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1101 01:21:11.988901   64625 kubeadm.go:322] [preflight] Running pre-flight checks
	I1101 01:21:11.988999   64625 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 01:21:11.989108   64625 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 01:21:11.989223   64625 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 01:21:11.989318   64625 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 01:21:11.991308   64625 out.go:204]   - Generating certificates and keys ...
	I1101 01:21:11.991399   64625 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1101 01:21:11.991486   64625 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1101 01:21:11.991579   64625 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 01:21:11.991647   64625 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1101 01:21:11.991731   64625 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1101 01:21:11.991800   64625 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1101 01:21:11.991874   64625 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1101 01:21:11.992064   64625 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-816754] and IPs [192.168.39.148 127.0.0.1 ::1]
	I1101 01:21:11.992150   64625 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1101 01:21:11.992333   64625 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-816754] and IPs [192.168.39.148 127.0.0.1 ::1]
	I1101 01:21:11.992441   64625 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 01:21:11.992522   64625 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 01:21:11.992591   64625 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1101 01:21:11.992671   64625 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 01:21:11.992758   64625 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 01:21:11.992809   64625 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 01:21:11.992863   64625 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 01:21:11.992917   64625 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 01:21:11.993023   64625 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 01:21:11.993118   64625 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 01:21:11.995043   64625 out.go:204]   - Booting up control plane ...
	I1101 01:21:11.995169   64625 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 01:21:11.995282   64625 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 01:21:11.995372   64625 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 01:21:11.995565   64625 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 01:21:11.995756   64625 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 01:21:11.995823   64625 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1101 01:21:11.996088   64625 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 01:21:11.996194   64625 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504715 seconds
	I1101 01:21:11.996313   64625 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 01:21:11.996480   64625 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 01:21:11.996576   64625 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 01:21:11.996823   64625 kubeadm.go:322] [mark-control-plane] Marking the node newest-cni-816754 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 01:21:11.996909   64625 kubeadm.go:322] [bootstrap-token] Using token: k5qo7m.j8zm1wwr1uavtb5c
	I1101 01:21:11.998544   64625 out.go:204]   - Configuring RBAC rules ...
	I1101 01:21:11.998694   64625 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 01:21:11.998823   64625 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 01:21:11.999017   64625 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 01:21:11.999185   64625 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 01:21:11.999312   64625 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 01:21:11.999422   64625 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 01:21:11.999589   64625 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 01:21:11.999663   64625 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1101 01:21:11.999730   64625 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1101 01:21:11.999741   64625 kubeadm.go:322] 
	I1101 01:21:11.999818   64625 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1101 01:21:11.999827   64625 kubeadm.go:322] 
	I1101 01:21:11.999943   64625 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1101 01:21:11.999955   64625 kubeadm.go:322] 
	I1101 01:21:11.999995   64625 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1101 01:21:12.000084   64625 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 01:21:12.000152   64625 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 01:21:12.000161   64625 kubeadm.go:322] 
	I1101 01:21:12.000241   64625 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1101 01:21:12.000307   64625 kubeadm.go:322] 
	I1101 01:21:12.000449   64625 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 01:21:12.000464   64625 kubeadm.go:322] 
	I1101 01:21:12.000532   64625 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1101 01:21:12.000641   64625 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 01:21:12.000736   64625 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 01:21:12.000750   64625 kubeadm.go:322] 
	I1101 01:21:12.000868   64625 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 01:21:12.000984   64625 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1101 01:21:12.000995   64625 kubeadm.go:322] 
	I1101 01:21:12.001102   64625 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token k5qo7m.j8zm1wwr1uavtb5c \
	I1101 01:21:12.001243   64625 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 \
	I1101 01:21:12.001281   64625 kubeadm.go:322] 	--control-plane 
	I1101 01:21:12.001287   64625 kubeadm.go:322] 
	I1101 01:21:12.001394   64625 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1101 01:21:12.001411   64625 kubeadm.go:322] 
	I1101 01:21:12.001512   64625 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token k5qo7m.j8zm1wwr1uavtb5c \
	I1101 01:21:12.001663   64625 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 
	I1101 01:21:12.001683   64625 cni.go:84] Creating CNI manager for ""
	I1101 01:21:12.001693   64625 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:21:12.003603   64625 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:21:12.005194   64625 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:21:12.072709   64625 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
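The 457-byte file copied above is minikube's generated bridge CNI configuration for this profile. What was actually written can be read back from the node; this is a minimal sketch, assuming the profile name newest-cni-816754 from the log and that the minikube binary is still available on the test host, not part of the logged run:

	# Print the bridge CNI config minikube wrote to the node (hypothetical follow-up check)
	minikube -p newest-cni-816754 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist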
	I1101 01:21:12.130203   64625 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 01:21:12.130270   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:12.130285   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9 minikube.k8s.io/name=newest-cni-816754 minikube.k8s.io/updated_at=2023_11_01T01_21_12_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:12.199829   64625 ops.go:34] apiserver oom_adj: -16
	I1101 01:21:12.420526   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:12.515267   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:13.120175   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:13.619679   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:14.120479   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:14.620503   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:15.120335   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:15.620057   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:16.119750   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:16.620284   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:17.119768   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:17.619605   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:18.119966   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:18.620188   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:19.120584   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:19.620170   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:20.120366   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:20.619569   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:21.119776   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:21.619729   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:22.120460   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:22.620567   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:23.119747   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:23.620355   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:24.119545   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:24.258378   64625 kubeadm.go:1081] duration metric: took 12.128163526s to wait for elevateKubeSystemPrivileges.
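The repeated "kubectl get sa default" calls above are minikube waiting for the default service account to exist before it grants kube-system elevated privileges. The same checks can be run by hand; a sketch, assuming kubectl is pointed at the newest-cni-816754 context that minikube creates for this profile:

	# Hypothetical manual equivalent of the wait loop above; succeeds once the service account exists
	kubectl --context newest-cni-816754 -n default get serviceaccount default
	# The RBAC binding created alongside it (see the clusterrolebinding command earlier in the log)
	kubectl --context newest-cni-816754 get clusterrolebinding minikube-rbac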
	I1101 01:21:24.258408   64625 kubeadm.go:406] StartCluster complete in 25.143076229s
	I1101 01:21:24.258431   64625 settings.go:142] acquiring lock: {Name:mk7f269e64dfd8d176737f993e01f6e6badafbc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:21:24.258527   64625 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 01:21:24.260239   64625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/kubeconfig: {Name:mk08da65b6c71084e1cfafb19800038e8c8303e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:21:24.260511   64625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 01:21:24.260626   64625 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1101 01:21:24.260717   64625 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-816754"
	I1101 01:21:24.260739   64625 addons.go:231] Setting addon storage-provisioner=true in "newest-cni-816754"
	I1101 01:21:24.260742   64625 config.go:182] Loaded profile config "newest-cni-816754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:21:24.260750   64625 addons.go:69] Setting default-storageclass=true in profile "newest-cni-816754"
	I1101 01:21:24.260775   64625 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-816754"
	I1101 01:21:24.260800   64625 host.go:66] Checking if "newest-cni-816754" exists ...
	I1101 01:21:24.261163   64625 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:21:24.261193   64625 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:21:24.261262   64625 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:21:24.261320   64625 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:21:24.277363   64625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43081
	I1101 01:21:24.277669   64625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42981
	I1101 01:21:24.277828   64625 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:21:24.278109   64625 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:21:24.278334   64625 main.go:141] libmachine: Using API Version  1
	I1101 01:21:24.278361   64625 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:21:24.278602   64625 main.go:141] libmachine: Using API Version  1
	I1101 01:21:24.278628   64625 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:21:24.278711   64625 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:21:24.278982   64625 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:21:24.279172   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetState
	I1101 01:21:24.279289   64625 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:21:24.279317   64625 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:21:24.282651   64625 addons.go:231] Setting addon default-storageclass=true in "newest-cni-816754"
	I1101 01:21:24.282697   64625 host.go:66] Checking if "newest-cni-816754" exists ...
	I1101 01:21:24.283032   64625 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:21:24.283080   64625 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:21:24.295415   64625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39813
	I1101 01:21:24.295903   64625 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:21:24.296366   64625 main.go:141] libmachine: Using API Version  1
	I1101 01:21:24.296395   64625 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:21:24.296711   64625 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:21:24.296913   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetState
	I1101 01:21:24.298845   64625 main.go:141] libmachine: (newest-cni-816754) Calling .DriverName
	I1101 01:21:24.300805   64625 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:21:24.300219   64625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35859
	I1101 01:21:24.302287   64625 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:21:24.302303   64625 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 01:21:24.302323   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:21:24.302881   64625 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:21:24.303353   64625 main.go:141] libmachine: Using API Version  1
	I1101 01:21:24.303373   64625 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:21:24.303749   64625 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:21:24.304951   64625 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:21:24.305016   64625 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:21:24.306196   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:21:24.308728   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHPort
	I1101 01:21:24.308797   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:21:24.308821   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:21:24.308893   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:21:24.309103   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHUsername
	I1101 01:21:24.309211   64625 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754/id_rsa Username:docker}
	I1101 01:21:24.320232   64625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41773
	I1101 01:21:24.320661   64625 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:21:24.321175   64625 main.go:141] libmachine: Using API Version  1
	I1101 01:21:24.321202   64625 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:21:24.321495   64625 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:21:24.321747   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetState
	I1101 01:21:24.323350   64625 main.go:141] libmachine: (newest-cni-816754) Calling .DriverName
	I1101 01:21:24.323631   64625 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 01:21:24.323649   64625 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 01:21:24.323666   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:21:24.326486   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:21:24.326888   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:21:24.326905   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:21:24.327077   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHPort
	I1101 01:21:24.327242   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:21:24.327364   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHUsername
	I1101 01:21:24.327490   64625 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754/id_rsa Username:docker}
	I1101 01:21:24.365496   64625 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-816754" context rescaled to 1 replicas
	I1101 01:21:24.365535   64625 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.148 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 01:21:24.367797   64625 out.go:177] * Verifying Kubernetes components...
	I1101 01:21:24.369087   64625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:21:24.487575   64625 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:21:24.509481   64625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 01:21:24.510739   64625 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:21:24.510797   64625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:21:24.532021   64625 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 01:21:26.175134   64625 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.687510841s)
	I1101 01:21:26.175178   64625 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.664368135s)
	I1101 01:21:26.175189   64625 main.go:141] libmachine: Making call to close driver server
	I1101 01:21:26.175194   64625 api_server.go:72] duration metric: took 1.809638509s to wait for apiserver process to appear ...
	I1101 01:21:26.175200   64625 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:21:26.175203   64625 main.go:141] libmachine: (newest-cni-816754) Calling .Close
	I1101 01:21:26.175212   64625 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I1101 01:21:26.175147   64625 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.665628435s)
	I1101 01:21:26.175279   64625 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
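The sed pipeline above patches the CoreDNS Corefile so that host.minikube.internal resolves to the host-side address (192.168.39.1 here). Whether the record landed can be confirmed from the ConfigMap; a sketch, assuming the same kubeconfig context, not part of the logged run:

	# Hypothetical verification of the injected host record (addresses taken from the log above)
	kubectl --context newest-cni-816754 -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
	# Expected to show: 192.168.39.1 host.minikube.internal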
	I1101 01:21:26.175355   64625 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.643304616s)
	I1101 01:21:26.175395   64625 main.go:141] libmachine: Making call to close driver server
	I1101 01:21:26.175410   64625 main.go:141] libmachine: (newest-cni-816754) Calling .Close
	I1101 01:21:26.175519   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Closing plugin on server side
	I1101 01:21:26.175541   64625 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:21:26.175556   64625 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:21:26.175569   64625 main.go:141] libmachine: Making call to close driver server
	I1101 01:21:26.175577   64625 main.go:141] libmachine: (newest-cni-816754) Calling .Close
	I1101 01:21:26.175652   64625 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:21:26.175689   64625 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:21:26.175719   64625 main.go:141] libmachine: Making call to close driver server
	I1101 01:21:26.175742   64625 main.go:141] libmachine: (newest-cni-816754) Calling .Close
	I1101 01:21:26.176270   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Closing plugin on server side
	I1101 01:21:26.176291   64625 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:21:26.176306   64625 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:21:26.176309   64625 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:21:26.176312   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Closing plugin on server side
	I1101 01:21:26.176327   64625 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:21:26.185699   64625 api_server.go:279] https://192.168.39.148:8443/healthz returned 200:
	ok
	I1101 01:21:26.192750   64625 api_server.go:141] control plane version: v1.28.3
	I1101 01:21:26.192775   64625 api_server.go:131] duration metric: took 17.570377ms to wait for apiserver health ...
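The healthz probe above queries the apiserver directly at https://192.168.39.148:8443/healthz and then reads the control-plane version. The same checks can be reproduced by hand; a sketch using the endpoint seen in the log, not part of the logged run:

	# Ask the apiserver for its health through the configured kubeconfig credentials
	kubectl --context newest-cni-816754 get --raw /healthz
	# Or hit the endpoint from the log directly, skipping certificate verification
	curl -k https://192.168.39.148:8443/healthz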
	I1101 01:21:26.192783   64625 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:21:26.200814   64625 main.go:141] libmachine: Making call to close driver server
	I1101 01:21:26.200837   64625 main.go:141] libmachine: (newest-cni-816754) Calling .Close
	I1101 01:21:26.201127   64625 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:21:26.201146   64625 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:21:26.201162   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Closing plugin on server side
	I1101 01:21:26.203187   64625 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1101 01:21:26.204870   64625 addons.go:502] enable addons completed in 1.944241081s: enabled=[storage-provisioner default-storageclass]
	I1101 01:21:26.207399   64625 system_pods.go:59] 8 kube-system pods found
	I1101 01:21:26.207437   64625 system_pods.go:61] "coredns-5dd5756b68-2v29v" [1af9d35f-627b-46a0-8d7b-f970cb448084] Failed / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 01:21:26.207446   64625 system_pods.go:61] "coredns-5dd5756b68-pjc72" [0b1337e1-3343-48cf-b3cc-7dccd56ef81f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 01:21:26.207451   64625 system_pods.go:61] "etcd-newest-cni-816754" [30549252-692f-44bb-8392-1a8e6cc2685b] Running
	I1101 01:21:26.207457   64625 system_pods.go:61] "kube-apiserver-newest-cni-816754" [a4cf18d6-3524-47fb-a3ea-a475e569e48b] Running
	I1101 01:21:26.207462   64625 system_pods.go:61] "kube-controller-manager-newest-cni-816754" [fb7fd047-d2f5-4d1f-8086-ccea3cd6c459] Running
	I1101 01:21:26.207470   64625 system_pods.go:61] "kube-proxy-xxn8q" [1bcb7d64-dfbb-43be-8256-55985e4a40ed] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 01:21:26.207477   64625 system_pods.go:61] "kube-scheduler-newest-cni-816754" [9cff66a2-d34d-4575-aa54-1e43d076e3c9] Running
	I1101 01:21:26.207489   64625 system_pods.go:61] "storage-provisioner" [ec5f4b42-5ff3-4a75-a37a-a41758482954] Pending
	I1101 01:21:26.207496   64625 system_pods.go:74] duration metric: took 14.707373ms to wait for pod list to return data ...
	I1101 01:21:26.207517   64625 default_sa.go:34] waiting for default service account to be created ...
	I1101 01:21:26.216725   64625 default_sa.go:45] found service account: "default"
	I1101 01:21:26.216754   64625 default_sa.go:55] duration metric: took 9.230149ms for default service account to be created ...
	I1101 01:21:26.216763   64625 kubeadm.go:581] duration metric: took 1.851207788s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I1101 01:21:26.216784   64625 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:21:26.222848   64625 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:21:26.222890   64625 node_conditions.go:123] node cpu capacity is 2
	I1101 01:21:26.222906   64625 node_conditions.go:105] duration metric: took 6.11587ms to run NodePressure ...
	I1101 01:21:26.222920   64625 start.go:228] waiting for startup goroutines ...
	I1101 01:21:26.222929   64625 start.go:233] waiting for cluster config update ...
	I1101 01:21:26.222942   64625 start.go:242] writing updated cluster config ...
	I1101 01:21:26.223289   64625 ssh_runner.go:195] Run: rm -f paused
	I1101 01:21:26.286087   64625 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1101 01:21:26.287849   64625 out.go:177] * Done! kubectl is now configured to use "newest-cni-816754" cluster and "default" namespace by default
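At this point the profile reports only storage-provisioner and default-storageclass as enabled addons and the kubeconfig context is set to newest-cni-816754. A quick way to confirm the final state after a run like this, assuming the profile still exists on the test host; these commands are illustrative, not part of the logged test:

	# Hypothetical post-run checks against the finished profile
	minikube -p newest-cni-816754 addons list
	kubectl --context newest-cni-816754 -n kube-system get pods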
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-11-01 01:00:04 UTC, ends at Wed 2023-11-01 01:21:27 UTC. --
	Nov 01 01:21:27 embed-certs-754132 crio[723]: time="2023-11-01 01:21:27.514938072Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698801687514904314,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=67d3272e-d835-463d-8dca-ef49a1e5d601 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:21:27 embed-certs-754132 crio[723]: time="2023-11-01 01:21:27.515797287Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=03d46e8e-9475-435c-a63f-c3821950820a name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:21:27 embed-certs-754132 crio[723]: time="2023-11-01 01:21:27.515882635Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=03d46e8e-9475-435c-a63f-c3821950820a name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:21:27 embed-certs-754132 crio[723]: time="2023-11-01 01:21:27.516092985Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d28340698815c870c266c1c350e03df688140bcf1e135a7004963522db855047,PodSandboxId:d409231bf8be0dd660b5e18385ae9020caa81c6fe3d741e10350cc41ebd2e242,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698800740349558304,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7feb8931-83d0-4968-a295-a4202e8fc8c3,},Annotations:map[string]string{io.kubernetes.container.hash: 27446c8b,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1c58dca73e3cce0160cffb2a2ca266c63aaf632986703e915acd2f8e56f7b77,PodSandboxId:84cf7b9fd7aa639d07509a9df07d14db06e3f176a750a4e49a27ab5fea5978de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698800740234980877,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cwbfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7f5ba1e-bd63-456b-94cc-0e2c121b7792,},Annotations:map[string]string{io.kubernetes.container.hash: aaa212e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a2f7f37e23492d7c891917cbc79897ac18943c36594fedf027550d2f6b006ed,PodSandboxId:e245796915c72f2ae4030a1d8a8cd6db1edb8e02c0481b1f2e1d6d7dc22659f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698800739840337972,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6kqbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e03e6370-35d1-4438-8b18-d62b0a253ea6,},Annotations:map[string]string{io.kubernetes.container.hash: 515528ed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61170a3d73c795f3fe15b7ec6a56f67d0bbde0572c053b74e74ee78d2e13ce96,PodSandboxId:b78c4d5b084d1480831966342f00b5efe25a5e80e2a41cfdeb05a02c460eed3b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698800716261746108,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-754132,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 0a6ee9577f47faf2fcc83cf18cc76050,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:272478e18337c09b38f96f1b3110d25b51954a59ff295ca6699f743b27b0e20d,PodSandboxId:b5d6ed323107ae773936ce033ed465f9ed6cbbaaa2686cf4dc10348c782c761c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698800715830550356,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-754132,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d8dc3bb5d9b817ec64d94b3b634f0ac,},Annotations:
map[string]string{io.kubernetes.container.hash: 79674182,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cae13d3fdeec1e5275a2e5c1d1a9fc6af5ba238cf5ec981846cec6711a32c7ea,PodSandboxId:19bfc042d89ec7a403f3d85c6495c44be359ae22feff45d85c7efa92ff8af12d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698800715683397207,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-754132,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de7577929e1604837b75
088bed2286c,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f00b1e4feeac20c5deb9f138667fe2d11217e3e417d8c54a269693561f3529f6,PodSandboxId:bdb69288d9d92d54c9e97730b1d05f62324d0e95fe9792109e5a3d41d8a46e22,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698800715655681571,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-754132,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc86b9788e9fe6115f54b92ff1ed7d8
7,},Annotations:map[string]string{io.kubernetes.container.hash: d73d1b25,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=03d46e8e-9475-435c-a63f-c3821950820a name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:21:27 embed-certs-754132 crio[723]: time="2023-11-01 01:21:27.562232281Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=5910deb3-91c5-4455-b6d6-f9f96786ce56 name=/runtime.v1.RuntimeService/Version
	Nov 01 01:21:27 embed-certs-754132 crio[723]: time="2023-11-01 01:21:27.562370964Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=5910deb3-91c5-4455-b6d6-f9f96786ce56 name=/runtime.v1.RuntimeService/Version
	Nov 01 01:21:27 embed-certs-754132 crio[723]: time="2023-11-01 01:21:27.564027548Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=411e5d9a-ddba-4f69-baef-4447e91a528f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:21:27 embed-certs-754132 crio[723]: time="2023-11-01 01:21:27.564732245Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698801687564706888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=411e5d9a-ddba-4f69-baef-4447e91a528f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:21:27 embed-certs-754132 crio[723]: time="2023-11-01 01:21:27.565614096Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=21c82f66-e507-4efc-b363-c505a3ae1027 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:21:27 embed-certs-754132 crio[723]: time="2023-11-01 01:21:27.565681838Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=21c82f66-e507-4efc-b363-c505a3ae1027 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:21:27 embed-certs-754132 crio[723]: time="2023-11-01 01:21:27.565901335Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d28340698815c870c266c1c350e03df688140bcf1e135a7004963522db855047,PodSandboxId:d409231bf8be0dd660b5e18385ae9020caa81c6fe3d741e10350cc41ebd2e242,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698800740349558304,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7feb8931-83d0-4968-a295-a4202e8fc8c3,},Annotations:map[string]string{io.kubernetes.container.hash: 27446c8b,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1c58dca73e3cce0160cffb2a2ca266c63aaf632986703e915acd2f8e56f7b77,PodSandboxId:84cf7b9fd7aa639d07509a9df07d14db06e3f176a750a4e49a27ab5fea5978de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698800740234980877,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cwbfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7f5ba1e-bd63-456b-94cc-0e2c121b7792,},Annotations:map[string]string{io.kubernetes.container.hash: aaa212e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a2f7f37e23492d7c891917cbc79897ac18943c36594fedf027550d2f6b006ed,PodSandboxId:e245796915c72f2ae4030a1d8a8cd6db1edb8e02c0481b1f2e1d6d7dc22659f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698800739840337972,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6kqbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e03e6370-35d1-4438-8b18-d62b0a253ea6,},Annotations:map[string]string{io.kubernetes.container.hash: 515528ed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61170a3d73c795f3fe15b7ec6a56f67d0bbde0572c053b74e74ee78d2e13ce96,PodSandboxId:b78c4d5b084d1480831966342f00b5efe25a5e80e2a41cfdeb05a02c460eed3b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698800716261746108,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-754132,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 0a6ee9577f47faf2fcc83cf18cc76050,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:272478e18337c09b38f96f1b3110d25b51954a59ff295ca6699f743b27b0e20d,PodSandboxId:b5d6ed323107ae773936ce033ed465f9ed6cbbaaa2686cf4dc10348c782c761c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698800715830550356,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-754132,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d8dc3bb5d9b817ec64d94b3b634f0ac,},Annotations:
map[string]string{io.kubernetes.container.hash: 79674182,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cae13d3fdeec1e5275a2e5c1d1a9fc6af5ba238cf5ec981846cec6711a32c7ea,PodSandboxId:19bfc042d89ec7a403f3d85c6495c44be359ae22feff45d85c7efa92ff8af12d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698800715683397207,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-754132,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de7577929e1604837b75
088bed2286c,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f00b1e4feeac20c5deb9f138667fe2d11217e3e417d8c54a269693561f3529f6,PodSandboxId:bdb69288d9d92d54c9e97730b1d05f62324d0e95fe9792109e5a3d41d8a46e22,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698800715655681571,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-754132,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc86b9788e9fe6115f54b92ff1ed7d8
7,},Annotations:map[string]string{io.kubernetes.container.hash: d73d1b25,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=21c82f66-e507-4efc-b363-c505a3ae1027 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:21:27 embed-certs-754132 crio[723]: time="2023-11-01 01:21:27.618970939Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=28f9679b-a3ec-4020-a3ea-12e84cde4683 name=/runtime.v1.RuntimeService/Version
	Nov 01 01:21:27 embed-certs-754132 crio[723]: time="2023-11-01 01:21:27.619077154Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=28f9679b-a3ec-4020-a3ea-12e84cde4683 name=/runtime.v1.RuntimeService/Version
	Nov 01 01:21:27 embed-certs-754132 crio[723]: time="2023-11-01 01:21:27.620434100Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d2ce8f06-43ba-4cdf-b11b-dacc56b93cc8 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:21:27 embed-certs-754132 crio[723]: time="2023-11-01 01:21:27.620861181Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698801687620845363,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=d2ce8f06-43ba-4cdf-b11b-dacc56b93cc8 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:21:27 embed-certs-754132 crio[723]: time="2023-11-01 01:21:27.621558783Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6cea872e-2bfb-4765-8ad1-b940796ee2da name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:21:27 embed-certs-754132 crio[723]: time="2023-11-01 01:21:27.621629045Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6cea872e-2bfb-4765-8ad1-b940796ee2da name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:21:27 embed-certs-754132 crio[723]: time="2023-11-01 01:21:27.621795816Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d28340698815c870c266c1c350e03df688140bcf1e135a7004963522db855047,PodSandboxId:d409231bf8be0dd660b5e18385ae9020caa81c6fe3d741e10350cc41ebd2e242,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698800740349558304,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7feb8931-83d0-4968-a295-a4202e8fc8c3,},Annotations:map[string]string{io.kubernetes.container.hash: 27446c8b,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1c58dca73e3cce0160cffb2a2ca266c63aaf632986703e915acd2f8e56f7b77,PodSandboxId:84cf7b9fd7aa639d07509a9df07d14db06e3f176a750a4e49a27ab5fea5978de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698800740234980877,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cwbfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7f5ba1e-bd63-456b-94cc-0e2c121b7792,},Annotations:map[string]string{io.kubernetes.container.hash: aaa212e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a2f7f37e23492d7c891917cbc79897ac18943c36594fedf027550d2f6b006ed,PodSandboxId:e245796915c72f2ae4030a1d8a8cd6db1edb8e02c0481b1f2e1d6d7dc22659f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698800739840337972,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6kqbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e03e6370-35d1-4438-8b18-d62b0a253ea6,},Annotations:map[string]string{io.kubernetes.container.hash: 515528ed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61170a3d73c795f3fe15b7ec6a56f67d0bbde0572c053b74e74ee78d2e13ce96,PodSandboxId:b78c4d5b084d1480831966342f00b5efe25a5e80e2a41cfdeb05a02c460eed3b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698800716261746108,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-754132,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 0a6ee9577f47faf2fcc83cf18cc76050,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:272478e18337c09b38f96f1b3110d25b51954a59ff295ca6699f743b27b0e20d,PodSandboxId:b5d6ed323107ae773936ce033ed465f9ed6cbbaaa2686cf4dc10348c782c761c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698800715830550356,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-754132,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d8dc3bb5d9b817ec64d94b3b634f0ac,},Annotations:
map[string]string{io.kubernetes.container.hash: 79674182,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cae13d3fdeec1e5275a2e5c1d1a9fc6af5ba238cf5ec981846cec6711a32c7ea,PodSandboxId:19bfc042d89ec7a403f3d85c6495c44be359ae22feff45d85c7efa92ff8af12d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698800715683397207,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-754132,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de7577929e1604837b75
088bed2286c,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f00b1e4feeac20c5deb9f138667fe2d11217e3e417d8c54a269693561f3529f6,PodSandboxId:bdb69288d9d92d54c9e97730b1d05f62324d0e95fe9792109e5a3d41d8a46e22,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698800715655681571,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-754132,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc86b9788e9fe6115f54b92ff1ed7d8
7,},Annotations:map[string]string{io.kubernetes.container.hash: d73d1b25,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6cea872e-2bfb-4765-8ad1-b940796ee2da name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:21:27 embed-certs-754132 crio[723]: time="2023-11-01 01:21:27.663475096Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=aa831ea4-41c0-45be-8b7d-e1fd609a20ac name=/runtime.v1.RuntimeService/Version
	Nov 01 01:21:27 embed-certs-754132 crio[723]: time="2023-11-01 01:21:27.663565180Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=aa831ea4-41c0-45be-8b7d-e1fd609a20ac name=/runtime.v1.RuntimeService/Version
	Nov 01 01:21:27 embed-certs-754132 crio[723]: time="2023-11-01 01:21:27.665101343Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=2ec11f0b-4780-4e58-b6cc-9005ac3d995f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:21:27 embed-certs-754132 crio[723]: time="2023-11-01 01:21:27.665563595Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698801687665549587,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=2ec11f0b-4780-4e58-b6cc-9005ac3d995f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:21:27 embed-certs-754132 crio[723]: time="2023-11-01 01:21:27.666389614Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7c9f7287-b8c1-4944-822d-42fc1b4b57b0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:21:27 embed-certs-754132 crio[723]: time="2023-11-01 01:21:27.666456376Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7c9f7287-b8c1-4944-822d-42fc1b4b57b0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:21:27 embed-certs-754132 crio[723]: time="2023-11-01 01:21:27.666611523Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d28340698815c870c266c1c350e03df688140bcf1e135a7004963522db855047,PodSandboxId:d409231bf8be0dd660b5e18385ae9020caa81c6fe3d741e10350cc41ebd2e242,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698800740349558304,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7feb8931-83d0-4968-a295-a4202e8fc8c3,},Annotations:map[string]string{io.kubernetes.container.hash: 27446c8b,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1c58dca73e3cce0160cffb2a2ca266c63aaf632986703e915acd2f8e56f7b77,PodSandboxId:84cf7b9fd7aa639d07509a9df07d14db06e3f176a750a4e49a27ab5fea5978de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698800740234980877,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cwbfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7f5ba1e-bd63-456b-94cc-0e2c121b7792,},Annotations:map[string]string{io.kubernetes.container.hash: aaa212e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a2f7f37e23492d7c891917cbc79897ac18943c36594fedf027550d2f6b006ed,PodSandboxId:e245796915c72f2ae4030a1d8a8cd6db1edb8e02c0481b1f2e1d6d7dc22659f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698800739840337972,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6kqbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e03e6370-35d1-4438-8b18-d62b0a253ea6,},Annotations:map[string]string{io.kubernetes.container.hash: 515528ed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61170a3d73c795f3fe15b7ec6a56f67d0bbde0572c053b74e74ee78d2e13ce96,PodSandboxId:b78c4d5b084d1480831966342f00b5efe25a5e80e2a41cfdeb05a02c460eed3b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698800716261746108,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-754132,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 0a6ee9577f47faf2fcc83cf18cc76050,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:272478e18337c09b38f96f1b3110d25b51954a59ff295ca6699f743b27b0e20d,PodSandboxId:b5d6ed323107ae773936ce033ed465f9ed6cbbaaa2686cf4dc10348c782c761c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698800715830550356,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-754132,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d8dc3bb5d9b817ec64d94b3b634f0ac,},Annotations:
map[string]string{io.kubernetes.container.hash: 79674182,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cae13d3fdeec1e5275a2e5c1d1a9fc6af5ba238cf5ec981846cec6711a32c7ea,PodSandboxId:19bfc042d89ec7a403f3d85c6495c44be359ae22feff45d85c7efa92ff8af12d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698800715683397207,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-754132,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de7577929e1604837b75
088bed2286c,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f00b1e4feeac20c5deb9f138667fe2d11217e3e417d8c54a269693561f3529f6,PodSandboxId:bdb69288d9d92d54c9e97730b1d05f62324d0e95fe9792109e5a3d41d8a46e22,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698800715655681571,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-754132,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc86b9788e9fe6115f54b92ff1ed7d8
7,},Annotations:map[string]string{io.kubernetes.container.hash: d73d1b25,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7c9f7287-b8c1-4944-822d-42fc1b4b57b0 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d28340698815c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   d409231bf8be0       storage-provisioner
	a1c58dca73e3c       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   15 minutes ago      Running             kube-proxy                0                   84cf7b9fd7aa6       kube-proxy-cwbfz
	6a2f7f37e2349       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 minutes ago      Running             coredns                   0                   e245796915c72       coredns-5dd5756b68-6kqbc
	61170a3d73c79       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   16 minutes ago      Running             kube-scheduler            2                   b78c4d5b084d1       kube-scheduler-embed-certs-754132
	272478e18337c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   16 minutes ago      Running             etcd                      2                   b5d6ed323107a       etcd-embed-certs-754132
	cae13d3fdeec1       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   16 minutes ago      Running             kube-controller-manager   2                   19bfc042d89ec       kube-controller-manager-embed-certs-754132
	f00b1e4feeac2       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   16 minutes ago      Running             kube-apiserver            2                   bdb69288d9d92       kube-apiserver-embed-certs-754132
	
	* 
	* ==> coredns [6a2f7f37e23492d7c891917cbc79897ac18943c36594fedf027550d2f6b006ed] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-754132
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-754132
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9
	                    minikube.k8s.io/name=embed-certs-754132
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_01T01_05_23_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Nov 2023 01:05:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-754132
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Nov 2023 01:21:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Nov 2023 01:21:02 +0000   Wed, 01 Nov 2023 01:05:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Nov 2023 01:21:02 +0000   Wed, 01 Nov 2023 01:05:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Nov 2023 01:21:02 +0000   Wed, 01 Nov 2023 01:05:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Nov 2023 01:21:02 +0000   Wed, 01 Nov 2023 01:05:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.83
	  Hostname:    embed-certs-754132
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 b3f56a2074104288a4dc0065652f0242
	  System UUID:                b3f56a20-7410-4288-a4dc-0065652f0242
	  Boot ID:                    70f715ef-1758-4a32-8563-70324dc16d05
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-6kqbc                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-754132                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-embed-certs-754132             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-embed-certs-754132    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-cwbfz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-754132             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-57f55c9bc5-499xs               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node embed-certs-754132 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node embed-certs-754132 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node embed-certs-754132 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             16m   kubelet          Node embed-certs-754132 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                16m   kubelet          Node embed-certs-754132 status is now: NodeReady
	  Normal  RegisteredNode           15m   node-controller  Node embed-certs-754132 event: Registered Node embed-certs-754132 in Controller
	
	* 
	* ==> dmesg <==
	* [Nov 1 00:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.065147] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Nov 1 01:00] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.759416] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.136094] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.391126] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.783508] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.110039] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.156651] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.124192] systemd-fstab-generator[684]: Ignoring "noauto" for root device
	[  +0.242489] systemd-fstab-generator[708]: Ignoring "noauto" for root device
	[ +17.096705] systemd-fstab-generator[921]: Ignoring "noauto" for root device
	[ +22.496796] kauditd_printk_skb: 34 callbacks suppressed
	[  +3.134284] hrtimer: interrupt took 2781045 ns
	[Nov 1 01:05] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.099390] systemd-fstab-generator[3663]: Ignoring "noauto" for root device
	[  +9.298397] systemd-fstab-generator[3988]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [272478e18337c09b38f96f1b3110d25b51954a59ff295ca6699f743b27b0e20d] <==
	* {"level":"info","ts":"2023-11-01T01:05:17.828221Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1706423cc6d0face is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-01T01:05:17.828451Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1706423cc6d0face became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-01T01:05:17.828489Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1706423cc6d0face received MsgPreVoteResp from 1706423cc6d0face at term 1"}
	{"level":"info","ts":"2023-11-01T01:05:17.828561Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1706423cc6d0face became candidate at term 2"}
	{"level":"info","ts":"2023-11-01T01:05:17.828576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1706423cc6d0face received MsgVoteResp from 1706423cc6d0face at term 2"}
	{"level":"info","ts":"2023-11-01T01:05:17.828589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1706423cc6d0face became leader at term 2"}
	{"level":"info","ts":"2023-11-01T01:05:17.828596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1706423cc6d0face elected leader 1706423cc6d0face at term 2"}
	{"level":"info","ts":"2023-11-01T01:05:17.830335Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T01:05:17.831077Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"1706423cc6d0face","local-member-attributes":"{Name:embed-certs-754132 ClientURLs:[https://192.168.61.83:2379]}","request-path":"/0/members/1706423cc6d0face/attributes","cluster-id":"bef7c63622dde9b5","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-01T01:05:17.831617Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-01T01:05:17.834558Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-01T01:05:17.835Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-01T01:05:17.837331Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-01T01:05:17.837375Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-01T01:05:17.83764Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.83:2379"}
	{"level":"info","ts":"2023-11-01T01:05:17.839553Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bef7c63622dde9b5","local-member-id":"1706423cc6d0face","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T01:05:17.839666Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T01:05:17.839717Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T01:15:17.879193Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":677}
	{"level":"info","ts":"2023-11-01T01:15:17.881782Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":677,"took":"2.175116ms","hash":790491313}
	{"level":"info","ts":"2023-11-01T01:15:17.881859Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":790491313,"revision":677,"compact-revision":-1}
	{"level":"info","ts":"2023-11-01T01:20:17.886925Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":920}
	{"level":"info","ts":"2023-11-01T01:20:17.889603Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":920,"took":"1.985786ms","hash":1606104414}
	{"level":"info","ts":"2023-11-01T01:20:17.889698Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1606104414,"revision":920,"compact-revision":677}
	{"level":"info","ts":"2023-11-01T01:21:00.31186Z","caller":"traceutil/trace.go:171","msg":"trace[1032385708] transaction","detail":"{read_only:false; response_revision:1199; number_of_response:1; }","duration":"231.423072ms","start":"2023-11-01T01:21:00.080378Z","end":"2023-11-01T01:21:00.311801Z","steps":["trace[1032385708] 'process raft request'  (duration: 231.291753ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  01:21:28 up 21 min,  0 users,  load average: 0.17, 0.24, 0.26
	Linux embed-certs-754132 5.10.57 #1 SMP Tue Oct 31 22:14:31 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [f00b1e4feeac20c5deb9f138667fe2d11217e3e417d8c54a269693561f3529f6] <==
	* W1101 01:18:20.754955       1 handler_proxy.go:93] no RequestInfo found in the context
	E1101 01:18:20.755090       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1101 01:18:20.755131       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1101 01:19:19.650936       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1101 01:20:19.650105       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1101 01:20:19.755673       1 handler_proxy.go:93] no RequestInfo found in the context
	E1101 01:20:19.755857       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1101 01:20:19.756428       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1101 01:20:20.756220       1 handler_proxy.go:93] no RequestInfo found in the context
	W1101 01:20:20.756417       1 handler_proxy.go:93] no RequestInfo found in the context
	E1101 01:20:20.756508       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1101 01:20:20.756542       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1101 01:20:20.756418       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1101 01:20:20.758683       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1101 01:21:19.650717       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1101 01:21:20.758133       1 handler_proxy.go:93] no RequestInfo found in the context
	E1101 01:21:20.758377       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1101 01:21:20.758398       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1101 01:21:20.759648       1 handler_proxy.go:93] no RequestInfo found in the context
	E1101 01:21:20.759707       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1101 01:21:20.759724       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [cae13d3fdeec1e5275a2e5c1d1a9fc6af5ba238cf5ec981846cec6711a32c7ea] <==
	* I1101 01:15:36.473791       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:16:06.014327       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:16:06.483375       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:16:36.021619       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:16:36.493010       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1101 01:16:40.300300       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="188.975µs"
	I1101 01:16:53.307343       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="160.234µs"
	E1101 01:17:06.027323       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:17:06.501446       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:17:36.033493       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:17:36.510377       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:18:06.040510       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:18:06.527874       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:18:36.047502       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:18:36.542988       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:19:06.053878       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:19:06.551897       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:19:36.060767       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:19:36.561889       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:20:06.067712       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:20:06.570689       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:20:36.074155       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:20:36.584339       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:21:06.083780       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:21:06.594035       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [a1c58dca73e3cce0160cffb2a2ca266c63aaf632986703e915acd2f8e56f7b77] <==
	* I1101 01:05:40.727585       1 server_others.go:69] "Using iptables proxy"
	I1101 01:05:40.747590       1 node.go:141] Successfully retrieved node IP: 192.168.61.83
	I1101 01:05:40.804123       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1101 01:05:40.804166       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 01:05:40.806853       1 server_others.go:152] "Using iptables Proxier"
	I1101 01:05:40.807165       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 01:05:40.807593       1 server.go:846] "Version info" version="v1.28.3"
	I1101 01:05:40.807627       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 01:05:40.812889       1 config.go:188] "Starting service config controller"
	I1101 01:05:40.812968       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 01:05:40.813039       1 config.go:315] "Starting node config controller"
	I1101 01:05:40.813047       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 01:05:40.814651       1 config.go:97] "Starting endpoint slice config controller"
	I1101 01:05:40.814777       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 01:05:40.913203       1 shared_informer.go:318] Caches are synced for node config
	I1101 01:05:40.913203       1 shared_informer.go:318] Caches are synced for service config
	I1101 01:05:40.915505       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [61170a3d73c795f3fe15b7ec6a56f67d0bbde0572c053b74e74ee78d2e13ce96] <==
	* E1101 01:05:19.860890       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1101 01:05:19.860936       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1101 01:05:20.683176       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1101 01:05:20.683227       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1101 01:05:20.753653       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1101 01:05:20.753739       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1101 01:05:20.800588       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1101 01:05:20.800711       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1101 01:05:20.809454       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1101 01:05:20.809567       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1101 01:05:20.938074       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1101 01:05:20.938178       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1101 01:05:20.957589       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1101 01:05:20.957696       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1101 01:05:20.981604       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1101 01:05:20.981650       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 01:05:21.022807       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1101 01:05:21.022859       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1101 01:05:21.072791       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1101 01:05:21.072842       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1101 01:05:21.078846       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1101 01:05:21.078907       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1101 01:05:21.134980       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1101 01:05:21.135095       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I1101 01:05:23.938532       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-11-01 01:00:04 UTC, ends at Wed 2023-11-01 01:21:28 UTC. --
	Nov 01 01:19:00 embed-certs-754132 kubelet[3995]: E1101 01:19:00.281224    3995 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-499xs" podUID="617aecda-f132-4358-9da9-bbc4fc625da0"
	Nov 01 01:19:12 embed-certs-754132 kubelet[3995]: E1101 01:19:12.281909    3995 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-499xs" podUID="617aecda-f132-4358-9da9-bbc4fc625da0"
	Nov 01 01:19:23 embed-certs-754132 kubelet[3995]: E1101 01:19:23.319933    3995 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 01 01:19:23 embed-certs-754132 kubelet[3995]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 01 01:19:23 embed-certs-754132 kubelet[3995]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 01 01:19:23 embed-certs-754132 kubelet[3995]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 01 01:19:26 embed-certs-754132 kubelet[3995]: E1101 01:19:26.281124    3995 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-499xs" podUID="617aecda-f132-4358-9da9-bbc4fc625da0"
	Nov 01 01:19:37 embed-certs-754132 kubelet[3995]: E1101 01:19:37.281726    3995 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-499xs" podUID="617aecda-f132-4358-9da9-bbc4fc625da0"
	Nov 01 01:19:49 embed-certs-754132 kubelet[3995]: E1101 01:19:49.280693    3995 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-499xs" podUID="617aecda-f132-4358-9da9-bbc4fc625da0"
	Nov 01 01:20:02 embed-certs-754132 kubelet[3995]: E1101 01:20:02.281169    3995 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-499xs" podUID="617aecda-f132-4358-9da9-bbc4fc625da0"
	Nov 01 01:20:15 embed-certs-754132 kubelet[3995]: E1101 01:20:15.281116    3995 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-499xs" podUID="617aecda-f132-4358-9da9-bbc4fc625da0"
	Nov 01 01:20:23 embed-certs-754132 kubelet[3995]: E1101 01:20:23.323221    3995 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 01 01:20:23 embed-certs-754132 kubelet[3995]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 01 01:20:23 embed-certs-754132 kubelet[3995]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 01 01:20:23 embed-certs-754132 kubelet[3995]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 01 01:20:23 embed-certs-754132 kubelet[3995]: E1101 01:20:23.511065    3995 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Nov 01 01:20:28 embed-certs-754132 kubelet[3995]: E1101 01:20:28.281917    3995 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-499xs" podUID="617aecda-f132-4358-9da9-bbc4fc625da0"
	Nov 01 01:20:39 embed-certs-754132 kubelet[3995]: E1101 01:20:39.284025    3995 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-499xs" podUID="617aecda-f132-4358-9da9-bbc4fc625da0"
	Nov 01 01:20:54 embed-certs-754132 kubelet[3995]: E1101 01:20:54.281439    3995 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-499xs" podUID="617aecda-f132-4358-9da9-bbc4fc625da0"
	Nov 01 01:21:07 embed-certs-754132 kubelet[3995]: E1101 01:21:07.283226    3995 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-499xs" podUID="617aecda-f132-4358-9da9-bbc4fc625da0"
	Nov 01 01:21:21 embed-certs-754132 kubelet[3995]: E1101 01:21:21.281503    3995 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-499xs" podUID="617aecda-f132-4358-9da9-bbc4fc625da0"
	Nov 01 01:21:23 embed-certs-754132 kubelet[3995]: E1101 01:21:23.319681    3995 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 01 01:21:23 embed-certs-754132 kubelet[3995]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 01 01:21:23 embed-certs-754132 kubelet[3995]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 01 01:21:23 embed-certs-754132 kubelet[3995]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	* 
	* ==> storage-provisioner [d28340698815c870c266c1c350e03df688140bcf1e135a7004963522db855047] <==
	* I1101 01:05:40.553405       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 01:05:40.578831       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 01:05:40.578918       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 01:05:40.610239       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 01:05:40.611325       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-754132_4d090253-2492-4940-a7bc-e0a5c210e6de!
	I1101 01:05:40.612944       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"efcc0b08-4bc1-4dca-a43c-aa319d18bea1", APIVersion:"v1", ResourceVersion:"423", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-754132_4d090253-2492-4940-a7bc-e0a5c210e6de became leader
	I1101 01:05:40.712599       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-754132_4d090253-2492-4940-a7bc-e0a5c210e6de!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-754132 -n embed-certs-754132
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-754132 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-499xs
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-754132 describe pod metrics-server-57f55c9bc5-499xs
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-754132 describe pod metrics-server-57f55c9bc5-499xs: exit status 1 (74.703568ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-499xs" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-754132 describe pod metrics-server-57f55c9bc5-499xs: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (404.13s)
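A note on the NotFound above: the non-running-pod sweep at helpers_test.go:261 lists pods across all namespaces (-A), but the follow-up describe at helpers_test.go:277 passes no -n flag, so it queries the default namespace while metrics-server-57f55c9bc5-499xs lives in kube-system (see the kubelet entries in the attached logs). A namespaced describe, sketched below with the same context, should locate the pod while the profile still exists:

	kubectl --context embed-certs-754132 -n kube-system describe pod metrics-server-57f55c9bc5-499xs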

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (363.86s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1101 01:15:30.317056   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/enable-default-cni-090856/client.crt: no such file or directory
E1101 01:15:35.092060   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
E1101 01:15:47.920885   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/flannel-090856/client.crt: no such file or directory
E1101 01:15:52.798404   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/bridge-090856/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-639310 -n default-k8s-diff-port-639310
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-11-01 01:21:28.767544565 +0000 UTC m=+5862.332126560
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-639310 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-639310 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.644µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-639310 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
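For reference, the image assertion at start_stop_delete_test.go:297 checks that the dashboard-metrics-scraper deployment carries the registry.k8s.io/echoserver:1.4 override passed via --images=MetricsScraper=... (see the Audit table in the attached logs). When the describe call times out, as it did here, a narrower query can fetch just the image; a sketch reusing the context and deployment name from the failing command:

	kubectl --context default-k8s-diff-port-639310 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'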
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-639310 -n default-k8s-diff-port-639310
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-639310 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-639310 logs -n 25: (1.715855332s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p flannel-090856                                      | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	| delete  | -p                                                     | disable-driver-mounts-130996 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | disable-driver-mounts-130996                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:53 UTC |
	|         | default-k8s-diff-port-639310                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-008483             | no-preload-008483            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC | 01 Nov 23 00:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-008483                                   | no-preload-008483            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-754132            | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC | 01 Nov 23 00:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-754132                                  | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-330042        | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC | 01 Nov 23 00:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-330042                              | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-639310  | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:53 UTC | 01 Nov 23 00:53 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:53 UTC |                     |
	|         | default-k8s-diff-port-639310                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-008483                  | no-preload-008483            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-754132                 | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-008483                                   | no-preload-008483            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC | 01 Nov 23 01:06 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| start   | -p embed-certs-754132                                  | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC | 01 Nov 23 01:05 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-330042             | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-330042                              | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC | 01 Nov 23 01:07 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-639310       | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:56 UTC | 01 Nov 23 01:06 UTC |
	|         | default-k8s-diff-port-639310                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| delete  | -p old-k8s-version-330042                              | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:20 UTC | 01 Nov 23 01:20 UTC |
	| start   | -p newest-cni-816754 --memory=2200 --alsologtostderr   | newest-cni-816754            | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:20 UTC | 01 Nov 23 01:21 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| delete  | -p no-preload-008483                                   | no-preload-008483            | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:21 UTC | 01 Nov 23 01:21 UTC |
	| addons  | enable metrics-server -p newest-cni-816754             | newest-cni-816754            | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:21 UTC | 01 Nov 23 01:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p newest-cni-816754                                   | newest-cni-816754            | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:21 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| delete  | -p embed-certs-754132                                  | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:21 UTC |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/01 01:20:26
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 01:20:26.901577   64625 out.go:296] Setting OutFile to fd 1 ...
	I1101 01:20:26.901877   64625 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 01:20:26.901887   64625 out.go:309] Setting ErrFile to fd 2...
	I1101 01:20:26.901895   64625 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 01:20:26.902108   64625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7305/.minikube/bin
	I1101 01:20:26.902738   64625 out.go:303] Setting JSON to false
	I1101 01:20:26.903795   64625 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7372,"bootTime":1698794255,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 01:20:26.903867   64625 start.go:138] virtualization: kvm guest
	I1101 01:20:26.906222   64625 out.go:177] * [newest-cni-816754] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1101 01:20:26.907419   64625 out.go:177]   - MINIKUBE_LOCATION=17486
	I1101 01:20:26.907510   64625 notify.go:220] Checking for updates...
	I1101 01:20:26.908569   64625 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 01:20:26.909780   64625 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 01:20:26.911040   64625 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7305/.minikube
	I1101 01:20:26.912350   64625 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 01:20:26.913709   64625 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 01:20:26.915573   64625 config.go:182] Loaded profile config "default-k8s-diff-port-639310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:20:26.915683   64625 config.go:182] Loaded profile config "embed-certs-754132": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:20:26.915774   64625 config.go:182] Loaded profile config "no-preload-008483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:20:26.915865   64625 driver.go:378] Setting default libvirt URI to qemu:///system
	I1101 01:20:26.957402   64625 out.go:177] * Using the kvm2 driver based on user configuration
	I1101 01:20:26.959160   64625 start.go:298] selected driver: kvm2
	I1101 01:20:26.959182   64625 start.go:902] validating driver "kvm2" against <nil>
	I1101 01:20:26.959194   64625 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 01:20:26.959984   64625 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:20:26.960073   64625 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17486-7305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1101 01:20:26.976448   64625 install.go:137] /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1101 01:20:26.976536   64625 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	W1101 01:20:26.976585   64625 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1101 01:20:26.976861   64625 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 01:20:26.976937   64625 cni.go:84] Creating CNI manager for ""
	I1101 01:20:26.976955   64625 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:20:26.976973   64625 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1101 01:20:26.976985   64625 start_flags.go:323] config:
	{Name:newest-cni-816754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-816754 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 01:20:26.977171   64625 iso.go:125] acquiring lock: {Name:mk1f649ca0b7c1ae293cd66cb85f9eeda028b20b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:20:26.979480   64625 out.go:177] * Starting control plane node newest-cni-816754 in cluster newest-cni-816754
	I1101 01:20:26.981114   64625 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 01:20:26.981177   64625 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1101 01:20:26.981212   64625 cache.go:56] Caching tarball of preloaded images
	I1101 01:20:26.981372   64625 preload.go:174] Found /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 01:20:26.981391   64625 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1101 01:20:26.981513   64625 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/config.json ...
	I1101 01:20:26.981539   64625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/config.json: {Name:mk93f245040cb932920ceaccd9b3116731eb7701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:20:26.981715   64625 start.go:365] acquiring machines lock for newest-cni-816754: {Name:mk7aad88408c319111b9be8e59d9593a9e88374b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 01:20:26.981773   64625 start.go:369] acquired machines lock for "newest-cni-816754" in 41.542µs
	I1101 01:20:26.981798   64625 start.go:93] Provisioning new machine with config: &{Name:newest-cni-816754 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-816754 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 01:20:26.981922   64625 start.go:125] createHost starting for "" (driver="kvm2")
	I1101 01:20:26.984486   64625 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1101 01:20:26.984675   64625 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:20:26.984734   64625 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:20:26.999310   64625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40233
	I1101 01:20:26.999922   64625 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:20:27.000559   64625 main.go:141] libmachine: Using API Version  1
	I1101 01:20:27.000633   64625 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:20:27.001131   64625 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:20:27.001331   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetMachineName
	I1101 01:20:27.001486   64625 main.go:141] libmachine: (newest-cni-816754) Calling .DriverName
	I1101 01:20:27.001663   64625 start.go:159] libmachine.API.Create for "newest-cni-816754" (driver="kvm2")
	I1101 01:20:27.001704   64625 client.go:168] LocalClient.Create starting
	I1101 01:20:27.001749   64625 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem
	I1101 01:20:27.001823   64625 main.go:141] libmachine: Decoding PEM data...
	I1101 01:20:27.001848   64625 main.go:141] libmachine: Parsing certificate...
	I1101 01:20:27.001921   64625 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem
	I1101 01:20:27.001951   64625 main.go:141] libmachine: Decoding PEM data...
	I1101 01:20:27.001968   64625 main.go:141] libmachine: Parsing certificate...
	I1101 01:20:27.001996   64625 main.go:141] libmachine: Running pre-create checks...
	I1101 01:20:27.002010   64625 main.go:141] libmachine: (newest-cni-816754) Calling .PreCreateCheck
	I1101 01:20:27.002505   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetConfigRaw
	I1101 01:20:27.003066   64625 main.go:141] libmachine: Creating machine...
	I1101 01:20:27.003087   64625 main.go:141] libmachine: (newest-cni-816754) Calling .Create
	I1101 01:20:27.003248   64625 main.go:141] libmachine: (newest-cni-816754) Creating KVM machine...
	I1101 01:20:27.005057   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found existing default KVM network
	I1101 01:20:27.006772   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:27.006629   64648 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000112c40}
	I1101 01:20:27.012945   64625 main.go:141] libmachine: (newest-cni-816754) DBG | trying to create private KVM network mk-newest-cni-816754 192.168.39.0/24...
	I1101 01:20:27.098635   64625 main.go:141] libmachine: (newest-cni-816754) DBG | private KVM network mk-newest-cni-816754 192.168.39.0/24 created
	I1101 01:20:27.098677   64625 main.go:141] libmachine: (newest-cni-816754) Setting up store path in /home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754 ...
	I1101 01:20:27.098714   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:27.098609   64648 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17486-7305/.minikube
	I1101 01:20:27.098753   64625 main.go:141] libmachine: (newest-cni-816754) Building disk image from file:///home/jenkins/minikube-integration/17486-7305/.minikube/cache/iso/amd64/minikube-v1.32.0-1698773592-17486-amd64.iso
	I1101 01:20:27.098787   64625 main.go:141] libmachine: (newest-cni-816754) Downloading /home/jenkins/minikube-integration/17486-7305/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17486-7305/.minikube/cache/iso/amd64/minikube-v1.32.0-1698773592-17486-amd64.iso...
	I1101 01:20:27.330302   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:27.330064   64648 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754/id_rsa...
	I1101 01:20:27.606617   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:27.606462   64648 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754/newest-cni-816754.rawdisk...
	I1101 01:20:27.606653   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Writing magic tar header
	I1101 01:20:27.606677   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Writing SSH key tar header
	I1101 01:20:27.606784   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:27.606706   64648 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754 ...
	I1101 01:20:27.606843   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754
	I1101 01:20:27.606862   64625 main.go:141] libmachine: (newest-cni-816754) Setting executable bit set on /home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754 (perms=drwx------)
	I1101 01:20:27.606872   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17486-7305/.minikube/machines
	I1101 01:20:27.606888   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17486-7305/.minikube
	I1101 01:20:27.606899   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17486-7305
	I1101 01:20:27.606926   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1101 01:20:27.606938   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Checking permissions on dir: /home/jenkins
	I1101 01:20:27.606955   64625 main.go:141] libmachine: (newest-cni-816754) Setting executable bit set on /home/jenkins/minikube-integration/17486-7305/.minikube/machines (perms=drwxr-xr-x)
	I1101 01:20:27.606966   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Checking permissions on dir: /home
	I1101 01:20:27.606983   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Skipping /home - not owner
	I1101 01:20:27.606996   64625 main.go:141] libmachine: (newest-cni-816754) Setting executable bit set on /home/jenkins/minikube-integration/17486-7305/.minikube (perms=drwxr-xr-x)
	I1101 01:20:27.607004   64625 main.go:141] libmachine: (newest-cni-816754) Setting executable bit set on /home/jenkins/minikube-integration/17486-7305 (perms=drwxrwxr-x)
	I1101 01:20:27.607017   64625 main.go:141] libmachine: (newest-cni-816754) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1101 01:20:27.607032   64625 main.go:141] libmachine: (newest-cni-816754) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1101 01:20:27.607050   64625 main.go:141] libmachine: (newest-cni-816754) Creating domain...
	I1101 01:20:27.608546   64625 main.go:141] libmachine: (newest-cni-816754) define libvirt domain using xml: 
	I1101 01:20:27.608579   64625 main.go:141] libmachine: (newest-cni-816754) <domain type='kvm'>
	I1101 01:20:27.608590   64625 main.go:141] libmachine: (newest-cni-816754)   <name>newest-cni-816754</name>
	I1101 01:20:27.608596   64625 main.go:141] libmachine: (newest-cni-816754)   <memory unit='MiB'>2200</memory>
	I1101 01:20:27.608602   64625 main.go:141] libmachine: (newest-cni-816754)   <vcpu>2</vcpu>
	I1101 01:20:27.608610   64625 main.go:141] libmachine: (newest-cni-816754)   <features>
	I1101 01:20:27.608619   64625 main.go:141] libmachine: (newest-cni-816754)     <acpi/>
	I1101 01:20:27.608632   64625 main.go:141] libmachine: (newest-cni-816754)     <apic/>
	I1101 01:20:27.608644   64625 main.go:141] libmachine: (newest-cni-816754)     <pae/>
	I1101 01:20:27.608655   64625 main.go:141] libmachine: (newest-cni-816754)     
	I1101 01:20:27.608668   64625 main.go:141] libmachine: (newest-cni-816754)   </features>
	I1101 01:20:27.608678   64625 main.go:141] libmachine: (newest-cni-816754)   <cpu mode='host-passthrough'>
	I1101 01:20:27.608705   64625 main.go:141] libmachine: (newest-cni-816754)   
	I1101 01:20:27.608725   64625 main.go:141] libmachine: (newest-cni-816754)   </cpu>
	I1101 01:20:27.608732   64625 main.go:141] libmachine: (newest-cni-816754)   <os>
	I1101 01:20:27.608751   64625 main.go:141] libmachine: (newest-cni-816754)     <type>hvm</type>
	I1101 01:20:27.608760   64625 main.go:141] libmachine: (newest-cni-816754)     <boot dev='cdrom'/>
	I1101 01:20:27.608766   64625 main.go:141] libmachine: (newest-cni-816754)     <boot dev='hd'/>
	I1101 01:20:27.608775   64625 main.go:141] libmachine: (newest-cni-816754)     <bootmenu enable='no'/>
	I1101 01:20:27.608780   64625 main.go:141] libmachine: (newest-cni-816754)   </os>
	I1101 01:20:27.608786   64625 main.go:141] libmachine: (newest-cni-816754)   <devices>
	I1101 01:20:27.608793   64625 main.go:141] libmachine: (newest-cni-816754)     <disk type='file' device='cdrom'>
	I1101 01:20:27.608805   64625 main.go:141] libmachine: (newest-cni-816754)       <source file='/home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754/boot2docker.iso'/>
	I1101 01:20:27.608817   64625 main.go:141] libmachine: (newest-cni-816754)       <target dev='hdc' bus='scsi'/>
	I1101 01:20:27.608828   64625 main.go:141] libmachine: (newest-cni-816754)       <readonly/>
	I1101 01:20:27.608838   64625 main.go:141] libmachine: (newest-cni-816754)     </disk>
	I1101 01:20:27.608865   64625 main.go:141] libmachine: (newest-cni-816754)     <disk type='file' device='disk'>
	I1101 01:20:27.608886   64625 main.go:141] libmachine: (newest-cni-816754)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1101 01:20:27.608904   64625 main.go:141] libmachine: (newest-cni-816754)       <source file='/home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754/newest-cni-816754.rawdisk'/>
	I1101 01:20:27.608918   64625 main.go:141] libmachine: (newest-cni-816754)       <target dev='hda' bus='virtio'/>
	I1101 01:20:27.608932   64625 main.go:141] libmachine: (newest-cni-816754)     </disk>
	I1101 01:20:27.608945   64625 main.go:141] libmachine: (newest-cni-816754)     <interface type='network'>
	I1101 01:20:27.608968   64625 main.go:141] libmachine: (newest-cni-816754)       <source network='mk-newest-cni-816754'/>
	I1101 01:20:27.608981   64625 main.go:141] libmachine: (newest-cni-816754)       <model type='virtio'/>
	I1101 01:20:27.608997   64625 main.go:141] libmachine: (newest-cni-816754)     </interface>
	I1101 01:20:27.609012   64625 main.go:141] libmachine: (newest-cni-816754)     <interface type='network'>
	I1101 01:20:27.609025   64625 main.go:141] libmachine: (newest-cni-816754)       <source network='default'/>
	I1101 01:20:27.609039   64625 main.go:141] libmachine: (newest-cni-816754)       <model type='virtio'/>
	I1101 01:20:27.609051   64625 main.go:141] libmachine: (newest-cni-816754)     </interface>
	I1101 01:20:27.609064   64625 main.go:141] libmachine: (newest-cni-816754)     <serial type='pty'>
	I1101 01:20:27.609073   64625 main.go:141] libmachine: (newest-cni-816754)       <target port='0'/>
	I1101 01:20:27.609081   64625 main.go:141] libmachine: (newest-cni-816754)     </serial>
	I1101 01:20:27.609098   64625 main.go:141] libmachine: (newest-cni-816754)     <console type='pty'>
	I1101 01:20:27.609113   64625 main.go:141] libmachine: (newest-cni-816754)       <target type='serial' port='0'/>
	I1101 01:20:27.609128   64625 main.go:141] libmachine: (newest-cni-816754)     </console>
	I1101 01:20:27.609142   64625 main.go:141] libmachine: (newest-cni-816754)     <rng model='virtio'>
	I1101 01:20:27.609153   64625 main.go:141] libmachine: (newest-cni-816754)       <backend model='random'>/dev/random</backend>
	I1101 01:20:27.609165   64625 main.go:141] libmachine: (newest-cni-816754)     </rng>
	I1101 01:20:27.609180   64625 main.go:141] libmachine: (newest-cni-816754)     
	I1101 01:20:27.609190   64625 main.go:141] libmachine: (newest-cni-816754)     
	I1101 01:20:27.609202   64625 main.go:141] libmachine: (newest-cni-816754)   </devices>
	I1101 01:20:27.609216   64625 main.go:141] libmachine: (newest-cni-816754) </domain>
	I1101 01:20:27.609227   64625 main.go:141] libmachine: (newest-cni-816754) 
	I1101 01:20:27.613657   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:b4:0b:bf in network default
	I1101 01:20:27.614304   64625 main.go:141] libmachine: (newest-cni-816754) Ensuring networks are active...
	I1101 01:20:27.614325   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:27.615098   64625 main.go:141] libmachine: (newest-cni-816754) Ensuring network default is active
	I1101 01:20:27.615425   64625 main.go:141] libmachine: (newest-cni-816754) Ensuring network mk-newest-cni-816754 is active
	I1101 01:20:27.615881   64625 main.go:141] libmachine: (newest-cni-816754) Getting domain xml...
	I1101 01:20:27.616713   64625 main.go:141] libmachine: (newest-cni-816754) Creating domain...
	I1101 01:20:28.945794   64625 main.go:141] libmachine: (newest-cni-816754) Waiting to get IP...
	I1101 01:20:28.946695   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:28.947110   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:28.947197   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:28.947105   64648 retry.go:31] will retry after 218.225741ms: waiting for machine to come up
	I1101 01:20:29.166699   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:29.167318   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:29.167352   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:29.167256   64648 retry.go:31] will retry after 390.036378ms: waiting for machine to come up
	I1101 01:20:29.558855   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:29.559354   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:29.559389   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:29.559285   64648 retry.go:31] will retry after 410.30945ms: waiting for machine to come up
	I1101 01:20:29.970656   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:29.971063   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:29.971101   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:29.971014   64648 retry.go:31] will retry after 545.455542ms: waiting for machine to come up
	I1101 01:20:30.517668   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:30.518337   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:30.518379   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:30.518285   64648 retry.go:31] will retry after 562.086808ms: waiting for machine to come up
	I1101 01:20:31.081578   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:31.082157   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:31.082205   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:31.082083   64648 retry.go:31] will retry after 744.834019ms: waiting for machine to come up
	I1101 01:20:31.829035   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:31.829593   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:31.829623   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:31.829545   64648 retry.go:31] will retry after 1.124156549s: waiting for machine to come up
	I1101 01:20:32.955229   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:32.955754   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:32.955776   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:32.955707   64648 retry.go:31] will retry after 945.262883ms: waiting for machine to come up
	I1101 01:20:33.903162   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:33.903604   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:33.903627   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:33.903574   64648 retry.go:31] will retry after 1.342633534s: waiting for machine to come up
	I1101 01:20:35.247780   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:35.248333   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:35.248370   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:35.248271   64648 retry.go:31] will retry after 1.717433966s: waiting for machine to come up
	I1101 01:20:36.967748   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:36.968301   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:36.968331   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:36.968243   64648 retry.go:31] will retry after 2.125257088s: waiting for machine to come up
	I1101 01:20:39.096241   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:39.096903   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:39.096930   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:39.096845   64648 retry.go:31] will retry after 3.120284679s: waiting for machine to come up
	I1101 01:20:42.218526   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:42.219010   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:42.219035   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:42.218966   64648 retry.go:31] will retry after 3.400004837s: waiting for machine to come up
	I1101 01:20:45.621833   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:45.622314   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:45.622342   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:45.622255   64648 retry.go:31] will retry after 4.340884931s: waiting for machine to come up
	I1101 01:20:49.966397   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:49.966885   64625 main.go:141] libmachine: (newest-cni-816754) Found IP for machine: 192.168.39.148
	I1101 01:20:49.966944   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has current primary IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:49.966966   64625 main.go:141] libmachine: (newest-cni-816754) Reserving static IP address...
	I1101 01:20:49.967354   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find host DHCP lease matching {name: "newest-cni-816754", mac: "52:54:00:e9:10:53", ip: "192.168.39.148"} in network mk-newest-cni-816754
	I1101 01:20:50.049507   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Getting to WaitForSSH function...
	I1101 01:20:50.049555   64625 main.go:141] libmachine: (newest-cni-816754) Reserved static IP address: 192.168.39.148
	I1101 01:20:50.049575   64625 main.go:141] libmachine: (newest-cni-816754) Waiting for SSH to be available...
	I1101 01:20:50.052593   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.053018   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:50.053066   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.053156   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Using SSH client type: external
	I1101 01:20:50.053178   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754/id_rsa (-rw-------)
	I1101 01:20:50.053215   64625 main.go:141] libmachine: (newest-cni-816754) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.148 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 01:20:50.053237   64625 main.go:141] libmachine: (newest-cni-816754) DBG | About to run SSH command:
	I1101 01:20:50.053250   64625 main.go:141] libmachine: (newest-cni-816754) DBG | exit 0
	I1101 01:20:50.143882   64625 main.go:141] libmachine: (newest-cni-816754) DBG | SSH cmd err, output: <nil>: 
	I1101 01:20:50.144170   64625 main.go:141] libmachine: (newest-cni-816754) KVM machine creation complete!
	I1101 01:20:50.144668   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetConfigRaw
	I1101 01:20:50.145249   64625 main.go:141] libmachine: (newest-cni-816754) Calling .DriverName
	I1101 01:20:50.145481   64625 main.go:141] libmachine: (newest-cni-816754) Calling .DriverName
	I1101 01:20:50.145669   64625 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1101 01:20:50.145685   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetState
	I1101 01:20:50.147056   64625 main.go:141] libmachine: Detecting operating system of created instance...
	I1101 01:20:50.147070   64625 main.go:141] libmachine: Waiting for SSH to be available...
	I1101 01:20:50.147077   64625 main.go:141] libmachine: Getting to WaitForSSH function...
	I1101 01:20:50.147083   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:20:50.149699   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.150244   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:50.150269   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.150408   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHPort
	I1101 01:20:50.150591   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:50.150729   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:50.150859   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHUsername
	I1101 01:20:50.151028   64625 main.go:141] libmachine: Using SSH client type: native
	I1101 01:20:50.151445   64625 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I1101 01:20:50.151466   64625 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1101 01:20:50.267248   64625 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 01:20:50.267271   64625 main.go:141] libmachine: Detecting the provisioner...
	I1101 01:20:50.267280   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:20:50.270067   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.270474   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:50.270509   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.270587   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHPort
	I1101 01:20:50.270746   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:50.270937   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:50.271089   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHUsername
	I1101 01:20:50.271265   64625 main.go:141] libmachine: Using SSH client type: native
	I1101 01:20:50.271607   64625 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I1101 01:20:50.271624   64625 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1101 01:20:50.388826   64625 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g0cee705-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1101 01:20:50.388908   64625 main.go:141] libmachine: found compatible host: buildroot
	I1101 01:20:50.388923   64625 main.go:141] libmachine: Provisioning with buildroot...
	I1101 01:20:50.388932   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetMachineName
	I1101 01:20:50.389214   64625 buildroot.go:166] provisioning hostname "newest-cni-816754"
	I1101 01:20:50.389241   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetMachineName
	I1101 01:20:50.389409   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:20:50.392105   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.392490   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:50.392522   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.392627   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHPort
	I1101 01:20:50.392797   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:50.392983   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:50.393154   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHUsername
	I1101 01:20:50.393340   64625 main.go:141] libmachine: Using SSH client type: native
	I1101 01:20:50.393753   64625 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I1101 01:20:50.393771   64625 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-816754 && echo "newest-cni-816754" | sudo tee /etc/hostname
	I1101 01:20:50.524496   64625 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-816754
	
	I1101 01:20:50.524530   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:20:50.527592   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.528072   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:50.528121   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.528367   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHPort
	I1101 01:20:50.528602   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:50.528794   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:50.529017   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHUsername
	I1101 01:20:50.529224   64625 main.go:141] libmachine: Using SSH client type: native
	I1101 01:20:50.529620   64625 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I1101 01:20:50.529646   64625 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-816754' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-816754/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-816754' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 01:20:50.657526   64625 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 01:20:50.657563   64625 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1101 01:20:50.657588   64625 buildroot.go:174] setting up certificates
	I1101 01:20:50.657599   64625 provision.go:83] configureAuth start
	I1101 01:20:50.657618   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetMachineName
	I1101 01:20:50.657946   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetIP
	I1101 01:20:50.660675   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.660941   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:50.660970   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.661118   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:20:50.663458   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.663801   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:50.663833   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.664020   64625 provision.go:138] copyHostCerts
	I1101 01:20:50.664082   64625 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem, removing ...
	I1101 01:20:50.664104   64625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 01:20:50.664183   64625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1101 01:20:50.664312   64625 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem, removing ...
	I1101 01:20:50.664323   64625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 01:20:50.664359   64625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1101 01:20:50.664466   64625 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem, removing ...
	I1101 01:20:50.664480   64625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 01:20:50.664525   64625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1101 01:20:50.664577   64625 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.newest-cni-816754 san=[192.168.39.148 192.168.39.148 localhost 127.0.0.1 minikube newest-cni-816754]
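	The server certificate generated above carries SANs for the VM IP, localhost and the minikube hostnames. As a quick, purely illustrative check (not something the test run does), the SANs on the resulting cert could be listed with openssl; the path is the one from the log line above:
	  openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem \
	    | grep -A1 'Subject Alternative Name'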
	I1101 01:20:51.005619   64625 provision.go:172] copyRemoteCerts
	I1101 01:20:51.005678   64625 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 01:20:51.005708   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:20:51.008884   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.009323   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:51.009359   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.009521   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHPort
	I1101 01:20:51.009749   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:51.009919   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHUsername
	I1101 01:20:51.010066   64625 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754/id_rsa Username:docker}
	I1101 01:20:51.101132   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 01:20:51.125444   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 01:20:51.148653   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1101 01:20:51.172340   64625 provision.go:86] duration metric: configureAuth took 514.718541ms
	I1101 01:20:51.172363   64625 buildroot.go:189] setting minikube options for container-runtime
	I1101 01:20:51.172712   64625 config.go:182] Loaded profile config "newest-cni-816754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:20:51.172819   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:20:51.176208   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.176684   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:51.176721   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.176935   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHPort
	I1101 01:20:51.177176   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:51.177359   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:51.177516   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHUsername
	I1101 01:20:51.177725   64625 main.go:141] libmachine: Using SSH client type: native
	I1101 01:20:51.178105   64625 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I1101 01:20:51.178127   64625 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 01:20:51.483523   64625 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 01:20:51.483557   64625 main.go:141] libmachine: Checking connection to Docker...
	I1101 01:20:51.483589   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetURL
	I1101 01:20:51.484996   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Using libvirt version 6000000
	I1101 01:20:51.487079   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.487445   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:51.487479   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.487667   64625 main.go:141] libmachine: Docker is up and running!
	I1101 01:20:51.487682   64625 main.go:141] libmachine: Reticulating splines...
	I1101 01:20:51.487690   64625 client.go:171] LocalClient.Create took 24.485974012s
	I1101 01:20:51.487718   64625 start.go:167] duration metric: libmachine.API.Create for "newest-cni-816754" took 24.486056341s
	I1101 01:20:51.487735   64625 start.go:300] post-start starting for "newest-cni-816754" (driver="kvm2")
	I1101 01:20:51.487751   64625 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 01:20:51.487775   64625 main.go:141] libmachine: (newest-cni-816754) Calling .DriverName
	I1101 01:20:51.488081   64625 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 01:20:51.488105   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:20:51.490270   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.490622   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:51.490644   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.490743   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHPort
	I1101 01:20:51.490946   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:51.491112   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHUsername
	I1101 01:20:51.491250   64625 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754/id_rsa Username:docker}
	I1101 01:20:51.577564   64625 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 01:20:51.582421   64625 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 01:20:51.582451   64625 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1101 01:20:51.582522   64625 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1101 01:20:51.582624   64625 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> 145042.pem in /etc/ssl/certs
	I1101 01:20:51.582715   64625 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 01:20:51.591399   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:20:51.614218   64625 start.go:303] post-start completed in 126.467274ms
	I1101 01:20:51.614257   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetConfigRaw
	I1101 01:20:51.614794   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetIP
	I1101 01:20:51.617335   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.617843   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:51.617874   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.618197   64625 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/config.json ...
	I1101 01:20:51.618407   64625 start.go:128] duration metric: createHost completed in 24.636474986s
	I1101 01:20:51.618431   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:20:51.621008   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.621378   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:51.621410   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.621543   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHPort
	I1101 01:20:51.621766   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:51.621964   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:51.622142   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHUsername
	I1101 01:20:51.622347   64625 main.go:141] libmachine: Using SSH client type: native
	I1101 01:20:51.622677   64625 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I1101 01:20:51.622691   64625 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1101 01:20:51.740752   64625 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698801651.712516909
	
	I1101 01:20:51.740784   64625 fix.go:206] guest clock: 1698801651.712516909
	I1101 01:20:51.740793   64625 fix.go:219] Guest: 2023-11-01 01:20:51.712516909 +0000 UTC Remote: 2023-11-01 01:20:51.618418585 +0000 UTC m=+24.769563112 (delta=94.098324ms)
	I1101 01:20:51.740821   64625 fix.go:190] guest clock delta is within tolerance: 94.098324ms
	I1101 01:20:51.740830   64625 start.go:83] releasing machines lock for "newest-cni-816754", held for 24.759043949s
	I1101 01:20:51.740859   64625 main.go:141] libmachine: (newest-cni-816754) Calling .DriverName
	I1101 01:20:51.741149   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetIP
	I1101 01:20:51.743857   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.744261   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:51.744295   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.744431   64625 main.go:141] libmachine: (newest-cni-816754) Calling .DriverName
	I1101 01:20:51.744972   64625 main.go:141] libmachine: (newest-cni-816754) Calling .DriverName
	I1101 01:20:51.745171   64625 main.go:141] libmachine: (newest-cni-816754) Calling .DriverName
	I1101 01:20:51.745262   64625 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 01:20:51.745342   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:20:51.745424   64625 ssh_runner.go:195] Run: cat /version.json
	I1101 01:20:51.745451   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:20:51.748196   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.748256   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.748573   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:51.748610   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.748646   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:51.748669   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.748736   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHPort
	I1101 01:20:51.748834   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHPort
	I1101 01:20:51.748922   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:51.748981   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:51.749071   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHUsername
	I1101 01:20:51.749139   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHUsername
	I1101 01:20:51.749206   64625 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754/id_rsa Username:docker}
	I1101 01:20:51.749252   64625 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754/id_rsa Username:docker}
	I1101 01:20:51.867866   64625 ssh_runner.go:195] Run: systemctl --version
	I1101 01:20:51.873719   64625 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 01:20:52.035278   64625 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 01:20:52.041189   64625 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 01:20:52.041272   64625 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 01:20:52.056896   64625 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
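	The two lines above show minikube moving any pre-existing bridge/podman CNI configs out of the way so they cannot conflict with the CNI it installs later. A minimal standalone sketch of that step (same directory and suffix as in the log, run as root) looks like:
	  find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	    -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;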
	I1101 01:20:52.056923   64625 start.go:472] detecting cgroup driver to use...
	I1101 01:20:52.056988   64625 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 01:20:52.070654   64625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 01:20:52.083147   64625 docker.go:204] disabling cri-docker service (if available) ...
	I1101 01:20:52.083220   64625 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 01:20:52.095798   64625 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 01:20:52.108444   64625 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 01:20:52.224741   64625 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 01:20:52.347981   64625 docker.go:220] disabling docker service ...
	I1101 01:20:52.348055   64625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 01:20:52.361496   64625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 01:20:52.373422   64625 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 01:20:52.491564   64625 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 01:20:52.602648   64625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 01:20:52.614781   64625 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 01:20:52.632225   64625 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 01:20:52.632289   64625 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:20:52.642604   64625 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 01:20:52.642661   64625 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:20:52.652069   64625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:20:52.661998   64625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:20:52.671552   64625 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
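	The sed commands above point CRI-O at the registry.k8s.io/pause:3.9 image and switch it to the cgroupfs cgroup manager with a per-pod conmon cgroup. An illustrative way to confirm which values CRI-O actually picks up once it is restarted a few lines below (output layout may differ between CRI-O versions):
	  sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager|conmon_cgroup'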
	I1101 01:20:52.682867   64625 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 01:20:52.691926   64625 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 01:20:52.692008   64625 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 01:20:52.704079   64625 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 01:20:52.713491   64625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 01:20:52.839278   64625 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 01:20:53.013907   64625 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 01:20:53.013976   64625 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 01:20:53.019384   64625 start.go:540] Will wait 60s for crictl version
	I1101 01:20:53.019445   64625 ssh_runner.go:195] Run: which crictl
	I1101 01:20:53.023197   64625 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 01:20:53.061965   64625 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1101 01:20:53.062085   64625 ssh_runner.go:195] Run: crio --version
	I1101 01:20:53.108002   64625 ssh_runner.go:195] Run: crio --version
	I1101 01:20:53.158652   64625 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1101 01:20:53.160356   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetIP
	I1101 01:20:53.163312   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:53.163744   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:53.163790   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:53.164054   64625 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1101 01:20:53.168313   64625 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:20:53.181782   64625 localpath.go:92] copying /home/jenkins/minikube-integration/17486-7305/.minikube/client.crt -> /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/client.crt
	I1101 01:20:53.181942   64625 localpath.go:117] copying /home/jenkins/minikube-integration/17486-7305/.minikube/client.key -> /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/client.key
	I1101 01:20:53.184256   64625 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1101 01:20:53.185935   64625 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 01:20:53.186019   64625 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:20:53.221394   64625 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1101 01:20:53.221464   64625 ssh_runner.go:195] Run: which lz4
	I1101 01:20:53.225430   64625 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1101 01:20:53.229572   64625 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 01:20:53.229619   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1101 01:20:55.078513   64625 crio.go:444] Took 1.853113 seconds to copy over tarball
	I1101 01:20:55.078590   64625 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 01:20:58.039163   64625 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.960551837s)
	I1101 01:20:58.039190   64625 crio.go:451] Took 2.960647 seconds to extract the tarball
	I1101 01:20:58.039201   64625 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 01:20:58.082032   64625 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:20:58.154466   64625 crio.go:496] all images are preloaded for cri-o runtime.
	I1101 01:20:58.154490   64625 cache_images.go:84] Images are preloaded, skipping loading
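	At this point the preloaded tarball has been unpacked into CRI-O's image storage, so the cache check succeeds. A rough, illustrative spot-check for the same condition (it only greps the image list and assumes crictl is on the PATH, as in the log) would be:
	  sudo crictl images | grep registry.k8s.io/kube-apiserver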
	I1101 01:20:58.154555   64625 ssh_runner.go:195] Run: crio config
	I1101 01:20:58.229389   64625 cni.go:84] Creating CNI manager for ""
	I1101 01:20:58.229423   64625 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:20:58.229447   64625 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I1101 01:20:58.229485   64625 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.148 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-816754 NodeName:newest-cni-816754 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.148"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.148 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 01:20:58.229684   64625 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.148
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-816754"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.148
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.148"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
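	The generated kubeadm configuration above is written to /var/tmp/minikube/kubeadm.yaml further down and handed to kubeadm init. As an illustration only, a config of this shape can be exercised without creating a cluster via kubeadm's dry-run mode (a standard kubeadm flag; not something this test invokes):
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run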
	
	I1101 01:20:58.229811   64625 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-816754 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.148
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:newest-cni-816754 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
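	The [Service] drop-in above is what minikube writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp a few lines below). As a generic illustration of how such an override takes effect on a systemd host like the Buildroot guest here:
	  sudo systemctl daemon-reload      # re-read unit files so the drop-in is seen
	  systemctl cat kubelet             # prints the base unit plus 10-kubeadm.conf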
	I1101 01:20:58.229888   64625 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 01:20:58.242338   64625 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 01:20:58.242413   64625 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 01:20:58.252625   64625 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (414 bytes)
	I1101 01:20:58.271959   64625 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 01:20:58.290584   64625 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I1101 01:20:58.309005   64625 ssh_runner.go:195] Run: grep 192.168.39.148	control-plane.minikube.internal$ /etc/hosts
	I1101 01:20:58.313280   64625 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.148	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:20:58.325979   64625 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754 for IP: 192.168.39.148
	I1101 01:20:58.326024   64625 certs.go:190] acquiring lock for shared ca certs: {Name:mk6185b3938c4b51e80f57b7c81926c81b632b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:20:58.326204   64625 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key
	I1101 01:20:58.326246   64625 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key
	I1101 01:20:58.326329   64625 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/client.key
	I1101 01:20:58.326352   64625 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.key.b8daa033
	I1101 01:20:58.326362   64625 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.crt.b8daa033 with IP's: [192.168.39.148 10.96.0.1 127.0.0.1 10.0.0.1]
	I1101 01:20:58.427110   64625 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.crt.b8daa033 ...
	I1101 01:20:58.427140   64625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.crt.b8daa033: {Name:mk3f8c141290c3a65392487e79efcc8078b29342 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:20:58.427342   64625 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.key.b8daa033 ...
	I1101 01:20:58.427358   64625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.key.b8daa033: {Name:mk51785e712809e4c053079f222fcaf26d1cb6b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:20:58.427483   64625 certs.go:337] copying /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.crt.b8daa033 -> /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.crt
	I1101 01:20:58.427575   64625 certs.go:341] copying /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.key.b8daa033 -> /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.key
	I1101 01:20:58.427646   64625 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/proxy-client.key
	I1101 01:20:58.427668   64625 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/proxy-client.crt with IP's: []
	I1101 01:20:58.706887   64625 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/proxy-client.crt ...
	I1101 01:20:58.706917   64625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/proxy-client.crt: {Name:mkba323c47c990603b5078f2d8326413583ed649 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:20:58.707094   64625 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/proxy-client.key ...
	I1101 01:20:58.707115   64625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/proxy-client.key: {Name:mk640a807fa21c954ca16b3fd0849059bca2a284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:20:58.707364   64625 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem (1338 bytes)
	W1101 01:20:58.707404   64625 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504_empty.pem, impossibly tiny 0 bytes
	I1101 01:20:58.707415   64625 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 01:20:58.707435   64625 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem (1078 bytes)
	I1101 01:20:58.707464   64625 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem (1123 bytes)
	I1101 01:20:58.707485   64625 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem (1675 bytes)
	I1101 01:20:58.707533   64625 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:20:58.708143   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 01:20:58.734111   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 01:20:58.762510   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 01:20:58.788616   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 01:20:58.813676   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 01:20:58.839628   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 01:20:58.865730   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 01:20:58.892226   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 01:20:58.917779   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem --> /usr/share/ca-certificates/14504.pem (1338 bytes)
	I1101 01:20:58.942571   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /usr/share/ca-certificates/145042.pem (1708 bytes)
	I1101 01:20:58.965958   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 01:20:58.990159   64625 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 01:20:59.008835   64625 ssh_runner.go:195] Run: openssl version
	I1101 01:20:59.015199   64625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14504.pem && ln -fs /usr/share/ca-certificates/14504.pem /etc/ssl/certs/14504.pem"
	I1101 01:20:59.025730   64625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14504.pem
	I1101 01:20:59.030514   64625 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 01:20:59.030585   64625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14504.pem
	I1101 01:20:59.036535   64625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14504.pem /etc/ssl/certs/51391683.0"
	I1101 01:20:59.047853   64625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145042.pem && ln -fs /usr/share/ca-certificates/145042.pem /etc/ssl/certs/145042.pem"
	I1101 01:20:59.058620   64625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145042.pem
	I1101 01:20:59.063296   64625 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 01:20:59.063369   64625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145042.pem
	I1101 01:20:59.069054   64625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145042.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 01:20:59.078653   64625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 01:20:59.089081   64625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:20:59.094030   64625 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:20:59.094097   64625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:20:59.099890   64625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
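	The ln -fs commands above link each CA certificate under /etc/ssl/certs by its OpenSSL subject hash (51391683.0, 3ec20f2e.0, b5213941.0), which is how OpenSSL-based clients locate trusted CAs. The general pattern, shown here only as a sketch for the minikubeCA cert from the log:
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"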
	I1101 01:20:59.110687   64625 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 01:20:59.115284   64625 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1101 01:20:59.115337   64625 kubeadm.go:404] StartCluster: {Name:newest-cni-816754 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-816754 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.148 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 01:20:59.115439   64625 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 01:20:59.115505   64625 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:20:59.159819   64625 cri.go:89] found id: ""
	I1101 01:20:59.159974   64625 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 01:20:59.169450   64625 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:20:59.178882   64625 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:20:59.188155   64625 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:20:59.188209   64625 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1101 01:20:59.590963   64625 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 01:21:11.988833   64625 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1101 01:21:11.988901   64625 kubeadm.go:322] [preflight] Running pre-flight checks
	I1101 01:21:11.988999   64625 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 01:21:11.989108   64625 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 01:21:11.989223   64625 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 01:21:11.989318   64625 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 01:21:11.991308   64625 out.go:204]   - Generating certificates and keys ...
	I1101 01:21:11.991399   64625 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1101 01:21:11.991486   64625 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1101 01:21:11.991579   64625 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 01:21:11.991647   64625 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1101 01:21:11.991731   64625 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1101 01:21:11.991800   64625 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1101 01:21:11.991874   64625 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1101 01:21:11.992064   64625 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-816754] and IPs [192.168.39.148 127.0.0.1 ::1]
	I1101 01:21:11.992150   64625 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1101 01:21:11.992333   64625 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-816754] and IPs [192.168.39.148 127.0.0.1 ::1]
	I1101 01:21:11.992441   64625 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 01:21:11.992522   64625 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 01:21:11.992591   64625 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1101 01:21:11.992671   64625 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 01:21:11.992758   64625 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 01:21:11.992809   64625 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 01:21:11.992863   64625 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 01:21:11.992917   64625 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 01:21:11.993023   64625 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 01:21:11.993118   64625 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 01:21:11.995043   64625 out.go:204]   - Booting up control plane ...
	I1101 01:21:11.995169   64625 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 01:21:11.995282   64625 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 01:21:11.995372   64625 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 01:21:11.995565   64625 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 01:21:11.995756   64625 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 01:21:11.995823   64625 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1101 01:21:11.996088   64625 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 01:21:11.996194   64625 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504715 seconds
	I1101 01:21:11.996313   64625 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 01:21:11.996480   64625 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 01:21:11.996576   64625 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 01:21:11.996823   64625 kubeadm.go:322] [mark-control-plane] Marking the node newest-cni-816754 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 01:21:11.996909   64625 kubeadm.go:322] [bootstrap-token] Using token: k5qo7m.j8zm1wwr1uavtb5c
	I1101 01:21:11.998544   64625 out.go:204]   - Configuring RBAC rules ...
	I1101 01:21:11.998694   64625 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 01:21:11.998823   64625 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 01:21:11.999017   64625 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 01:21:11.999185   64625 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 01:21:11.999312   64625 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 01:21:11.999422   64625 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 01:21:11.999589   64625 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 01:21:11.999663   64625 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1101 01:21:11.999730   64625 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1101 01:21:11.999741   64625 kubeadm.go:322] 
	I1101 01:21:11.999818   64625 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1101 01:21:11.999827   64625 kubeadm.go:322] 
	I1101 01:21:11.999943   64625 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1101 01:21:11.999955   64625 kubeadm.go:322] 
	I1101 01:21:11.999995   64625 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1101 01:21:12.000084   64625 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 01:21:12.000152   64625 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 01:21:12.000161   64625 kubeadm.go:322] 
	I1101 01:21:12.000241   64625 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1101 01:21:12.000307   64625 kubeadm.go:322] 
	I1101 01:21:12.000449   64625 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 01:21:12.000464   64625 kubeadm.go:322] 
	I1101 01:21:12.000532   64625 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1101 01:21:12.000641   64625 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 01:21:12.000736   64625 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 01:21:12.000750   64625 kubeadm.go:322] 
	I1101 01:21:12.000868   64625 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 01:21:12.000984   64625 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1101 01:21:12.000995   64625 kubeadm.go:322] 
	I1101 01:21:12.001102   64625 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token k5qo7m.j8zm1wwr1uavtb5c \
	I1101 01:21:12.001243   64625 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 \
	I1101 01:21:12.001281   64625 kubeadm.go:322] 	--control-plane 
	I1101 01:21:12.001287   64625 kubeadm.go:322] 
	I1101 01:21:12.001394   64625 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1101 01:21:12.001411   64625 kubeadm.go:322] 
	I1101 01:21:12.001512   64625 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token k5qo7m.j8zm1wwr1uavtb5c \
	I1101 01:21:12.001663   64625 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 
	I1101 01:21:12.001683   64625 cni.go:84] Creating CNI manager for ""
	I1101 01:21:12.001693   64625 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:21:12.003603   64625 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:21:12.005194   64625 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:21:12.072709   64625 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1101 01:21:12.130203   64625 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 01:21:12.130270   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:12.130285   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9 minikube.k8s.io/name=newest-cni-816754 minikube.k8s.io/updated_at=2023_11_01T01_21_12_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:12.199829   64625 ops.go:34] apiserver oom_adj: -16
	I1101 01:21:12.420526   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:12.515267   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:13.120175   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:13.619679   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:14.120479   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:14.620503   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:15.120335   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:15.620057   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:16.119750   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:16.620284   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:17.119768   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:17.619605   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:18.119966   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:18.620188   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:19.120584   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:19.620170   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:20.120366   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:20.619569   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:21.119776   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:21.619729   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:22.120460   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:22.620567   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:23.119747   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:23.620355   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:24.119545   64625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:21:24.258378   64625 kubeadm.go:1081] duration metric: took 12.128163526s to wait for elevateKubeSystemPrivileges.
	I1101 01:21:24.258408   64625 kubeadm.go:406] StartCluster complete in 25.143076229s
	I1101 01:21:24.258431   64625 settings.go:142] acquiring lock: {Name:mk7f269e64dfd8d176737f993e01f6e6badafbc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:21:24.258527   64625 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 01:21:24.260239   64625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/kubeconfig: {Name:mk08da65b6c71084e1cfafb19800038e8c8303e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:21:24.260511   64625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 01:21:24.260626   64625 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1101 01:21:24.260717   64625 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-816754"
	I1101 01:21:24.260739   64625 addons.go:231] Setting addon storage-provisioner=true in "newest-cni-816754"
	I1101 01:21:24.260742   64625 config.go:182] Loaded profile config "newest-cni-816754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:21:24.260750   64625 addons.go:69] Setting default-storageclass=true in profile "newest-cni-816754"
	I1101 01:21:24.260775   64625 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-816754"
	I1101 01:21:24.260800   64625 host.go:66] Checking if "newest-cni-816754" exists ...
	I1101 01:21:24.261163   64625 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:21:24.261193   64625 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:21:24.261262   64625 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:21:24.261320   64625 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:21:24.277363   64625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43081
	I1101 01:21:24.277669   64625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42981
	I1101 01:21:24.277828   64625 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:21:24.278109   64625 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:21:24.278334   64625 main.go:141] libmachine: Using API Version  1
	I1101 01:21:24.278361   64625 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:21:24.278602   64625 main.go:141] libmachine: Using API Version  1
	I1101 01:21:24.278628   64625 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:21:24.278711   64625 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:21:24.278982   64625 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:21:24.279172   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetState
	I1101 01:21:24.279289   64625 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:21:24.279317   64625 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:21:24.282651   64625 addons.go:231] Setting addon default-storageclass=true in "newest-cni-816754"
	I1101 01:21:24.282697   64625 host.go:66] Checking if "newest-cni-816754" exists ...
	I1101 01:21:24.283032   64625 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:21:24.283080   64625 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:21:24.295415   64625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39813
	I1101 01:21:24.295903   64625 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:21:24.296366   64625 main.go:141] libmachine: Using API Version  1
	I1101 01:21:24.296395   64625 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:21:24.296711   64625 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:21:24.296913   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetState
	I1101 01:21:24.298845   64625 main.go:141] libmachine: (newest-cni-816754) Calling .DriverName
	I1101 01:21:24.300805   64625 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:21:24.300219   64625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35859
	I1101 01:21:24.302287   64625 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:21:24.302303   64625 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 01:21:24.302323   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:21:24.302881   64625 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:21:24.303353   64625 main.go:141] libmachine: Using API Version  1
	I1101 01:21:24.303373   64625 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:21:24.303749   64625 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:21:24.304951   64625 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:21:24.305016   64625 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:21:24.306196   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:21:24.308728   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHPort
	I1101 01:21:24.308797   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:21:24.308821   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:21:24.308893   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:21:24.309103   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHUsername
	I1101 01:21:24.309211   64625 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754/id_rsa Username:docker}
	I1101 01:21:24.320232   64625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41773
	I1101 01:21:24.320661   64625 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:21:24.321175   64625 main.go:141] libmachine: Using API Version  1
	I1101 01:21:24.321202   64625 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:21:24.321495   64625 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:21:24.321747   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetState
	I1101 01:21:24.323350   64625 main.go:141] libmachine: (newest-cni-816754) Calling .DriverName
	I1101 01:21:24.323631   64625 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 01:21:24.323649   64625 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 01:21:24.323666   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:21:24.326486   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:21:24.326888   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:21:24.326905   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:21:24.327077   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHPort
	I1101 01:21:24.327242   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:21:24.327364   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHUsername
	I1101 01:21:24.327490   64625 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754/id_rsa Username:docker}
	I1101 01:21:24.365496   64625 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-816754" context rescaled to 1 replicas
	I1101 01:21:24.365535   64625 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.148 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 01:21:24.367797   64625 out.go:177] * Verifying Kubernetes components...
	I1101 01:21:24.369087   64625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:21:24.487575   64625 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:21:24.509481   64625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 01:21:24.510739   64625 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:21:24.510797   64625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:21:24.532021   64625 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 01:21:26.175134   64625 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.687510841s)
	I1101 01:21:26.175178   64625 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.664368135s)
	I1101 01:21:26.175189   64625 main.go:141] libmachine: Making call to close driver server
	I1101 01:21:26.175194   64625 api_server.go:72] duration metric: took 1.809638509s to wait for apiserver process to appear ...
	I1101 01:21:26.175200   64625 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:21:26.175203   64625 main.go:141] libmachine: (newest-cni-816754) Calling .Close
	I1101 01:21:26.175212   64625 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I1101 01:21:26.175147   64625 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.665628435s)
	I1101 01:21:26.175279   64625 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1101 01:21:26.175355   64625 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.643304616s)
	I1101 01:21:26.175395   64625 main.go:141] libmachine: Making call to close driver server
	I1101 01:21:26.175410   64625 main.go:141] libmachine: (newest-cni-816754) Calling .Close
	I1101 01:21:26.175519   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Closing plugin on server side
	I1101 01:21:26.175541   64625 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:21:26.175556   64625 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:21:26.175569   64625 main.go:141] libmachine: Making call to close driver server
	I1101 01:21:26.175577   64625 main.go:141] libmachine: (newest-cni-816754) Calling .Close
	I1101 01:21:26.175652   64625 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:21:26.175689   64625 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:21:26.175719   64625 main.go:141] libmachine: Making call to close driver server
	I1101 01:21:26.175742   64625 main.go:141] libmachine: (newest-cni-816754) Calling .Close
	I1101 01:21:26.176270   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Closing plugin on server side
	I1101 01:21:26.176291   64625 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:21:26.176306   64625 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:21:26.176309   64625 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:21:26.176312   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Closing plugin on server side
	I1101 01:21:26.176327   64625 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:21:26.185699   64625 api_server.go:279] https://192.168.39.148:8443/healthz returned 200:
	ok
	I1101 01:21:26.192750   64625 api_server.go:141] control plane version: v1.28.3
	I1101 01:21:26.192775   64625 api_server.go:131] duration metric: took 17.570377ms to wait for apiserver health ...
	I1101 01:21:26.192783   64625 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:21:26.200814   64625 main.go:141] libmachine: Making call to close driver server
	I1101 01:21:26.200837   64625 main.go:141] libmachine: (newest-cni-816754) Calling .Close
	I1101 01:21:26.201127   64625 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:21:26.201146   64625 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:21:26.201162   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Closing plugin on server side
	I1101 01:21:26.203187   64625 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1101 01:21:26.204870   64625 addons.go:502] enable addons completed in 1.944241081s: enabled=[storage-provisioner default-storageclass]
	I1101 01:21:26.207399   64625 system_pods.go:59] 8 kube-system pods found
	I1101 01:21:26.207437   64625 system_pods.go:61] "coredns-5dd5756b68-2v29v" [1af9d35f-627b-46a0-8d7b-f970cb448084] Failed / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 01:21:26.207446   64625 system_pods.go:61] "coredns-5dd5756b68-pjc72" [0b1337e1-3343-48cf-b3cc-7dccd56ef81f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 01:21:26.207451   64625 system_pods.go:61] "etcd-newest-cni-816754" [30549252-692f-44bb-8392-1a8e6cc2685b] Running
	I1101 01:21:26.207457   64625 system_pods.go:61] "kube-apiserver-newest-cni-816754" [a4cf18d6-3524-47fb-a3ea-a475e569e48b] Running
	I1101 01:21:26.207462   64625 system_pods.go:61] "kube-controller-manager-newest-cni-816754" [fb7fd047-d2f5-4d1f-8086-ccea3cd6c459] Running
	I1101 01:21:26.207470   64625 system_pods.go:61] "kube-proxy-xxn8q" [1bcb7d64-dfbb-43be-8256-55985e4a40ed] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 01:21:26.207477   64625 system_pods.go:61] "kube-scheduler-newest-cni-816754" [9cff66a2-d34d-4575-aa54-1e43d076e3c9] Running
	I1101 01:21:26.207489   64625 system_pods.go:61] "storage-provisioner" [ec5f4b42-5ff3-4a75-a37a-a41758482954] Pending
	I1101 01:21:26.207496   64625 system_pods.go:74] duration metric: took 14.707373ms to wait for pod list to return data ...
	I1101 01:21:26.207517   64625 default_sa.go:34] waiting for default service account to be created ...
	I1101 01:21:26.216725   64625 default_sa.go:45] found service account: "default"
	I1101 01:21:26.216754   64625 default_sa.go:55] duration metric: took 9.230149ms for default service account to be created ...
	I1101 01:21:26.216763   64625 kubeadm.go:581] duration metric: took 1.851207788s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I1101 01:21:26.216784   64625 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:21:26.222848   64625 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:21:26.222890   64625 node_conditions.go:123] node cpu capacity is 2
	I1101 01:21:26.222906   64625 node_conditions.go:105] duration metric: took 6.11587ms to run NodePressure ...
	I1101 01:21:26.222920   64625 start.go:228] waiting for startup goroutines ...
	I1101 01:21:26.222929   64625 start.go:233] waiting for cluster config update ...
	I1101 01:21:26.222942   64625 start.go:242] writing updated cluster config ...
	I1101 01:21:26.223289   64625 ssh_runner.go:195] Run: rm -f paused
	I1101 01:21:26.286087   64625 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1101 01:21:26.287849   64625 out.go:177] * Done! kubectl is now configured to use "newest-cni-816754" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-11-01 01:00:47 UTC, ends at Wed 2023-11-01 01:21:30 UTC. --
	Nov 01 01:21:30 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:21:30.054322937Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698801690054306396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=bf46faf9-0e9b-47e9-b04e-c973c4953dee name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:21:30 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:21:30.055345154Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e9eef92f-4ec0-4626-9a90-0565b84c0467 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:21:30 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:21:30.055415370Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e9eef92f-4ec0-4626-9a90-0565b84c0467 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:21:30 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:21:30.055688240Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a85b8e74173dbc34ea106aa829e909bb3fdc9fc0aa01d5f03beec385939e885,PodSandboxId:af1db69584833a352404ac369d09504166c678f9aa4b89facb0dd0607707cc23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698800773059095055,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaba9583-e564-4804-9cd3-2b4de36c85da,},Annotations:map[string]string{io.kubernetes.container.hash: ac50747,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8745454c3ba65b39dcca4ec859db6eec9bde20e8655cef3187db49575282aa10,PodSandboxId:cfe8082e2628799d9efe7b672e81ddcad90a99dd001281bd3e01c9e33fb9b901,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698800772252049052,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kzgzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d59980-f28a-482c-9aa8-8502915417f0,},Annotations:map[string]string{io.kubernetes.container.hash: a7c19628,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22be33464e2b616389f8c1c9fe097420418464330ebba5269746922fb0dead46,PodSandboxId:7fbb88518b5739fdad0ad3c9ab7d26f2a104dc851ad0c5a93651276faa04d55a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698800771247642609,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rgzt8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d136c6a-e0b2-44c3-a17b-85649d6ff7b7,},Annotations:map[string]string{io.kubernetes.container.hash: c0e462ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f060c7640be39d364fe8967ac8f38f7e607548707a374220ef0feb1305678cf3,PodSandboxId:d5a7315324b17f0871c73c4759bac5ae2592a914739929d59ec9a6545d9acf35,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698800747606688658,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-639310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48958f49a129074eec
3f767ffb1dddd1,},Annotations:map[string]string{io.kubernetes.container.hash: 7ec6a6b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:178b076f51f7e8b659548480b9d8ff724f062bfec5c6ec0c3084b6d182210a51,PodSandboxId:b24c09745f83dc0eb98666502bab147baf943158dfd4f937ac3eff1a6e79f77c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698800747007400537,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-639310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 758e58c461773c5e0d
a7f3fa9c9b2628,},Annotations:map[string]string{io.kubernetes.container.hash: c78fc5b1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa1d8001c088acee185e9dad86cadfffdb1d5d2b62e785ec1ecd9cf0628faa80,PodSandboxId:8ec5d4b8331c34bf93d80dd0902a599768985a0fa2db30d361a1603fbe6958dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698800746803011038,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-639310,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 22af6e3d9158739e028e940aca1196e5,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203adcd67c53b37280b1e4ca576ca64ec2acf717740fea98d8ab311db9f57ed3,PodSandboxId:4b46efeec38d55eccf4d2a8220af4bfeb16484d377e7133b99afc539a4f7659c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698800746611905028,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-639310,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 76224889b8a2b452d7f7b1ab03f60615,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e9eef92f-4ec0-4626-9a90-0565b84c0467 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:21:30 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:21:30.097967303Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=620ea250-7273-4b71-aced-8ac9ecca7de5 name=/runtime.v1.RuntimeService/Version
	Nov 01 01:21:30 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:21:30.098082701Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=620ea250-7273-4b71-aced-8ac9ecca7de5 name=/runtime.v1.RuntimeService/Version
	Nov 01 01:21:30 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:21:30.099320877Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6a4d8e9a-1532-43dd-87e7-798249010e2e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:21:30 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:21:30.099822522Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698801690099806635,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=6a4d8e9a-1532-43dd-87e7-798249010e2e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:21:30 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:21:30.100488678Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a0ac6843-2684-480a-8583-d8650ff33f17 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:21:30 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:21:30.100563945Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a0ac6843-2684-480a-8583-d8650ff33f17 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:21:30 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:21:30.100733480Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a85b8e74173dbc34ea106aa829e909bb3fdc9fc0aa01d5f03beec385939e885,PodSandboxId:af1db69584833a352404ac369d09504166c678f9aa4b89facb0dd0607707cc23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698800773059095055,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaba9583-e564-4804-9cd3-2b4de36c85da,},Annotations:map[string]string{io.kubernetes.container.hash: ac50747,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8745454c3ba65b39dcca4ec859db6eec9bde20e8655cef3187db49575282aa10,PodSandboxId:cfe8082e2628799d9efe7b672e81ddcad90a99dd001281bd3e01c9e33fb9b901,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698800772252049052,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kzgzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d59980-f28a-482c-9aa8-8502915417f0,},Annotations:map[string]string{io.kubernetes.container.hash: a7c19628,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22be33464e2b616389f8c1c9fe097420418464330ebba5269746922fb0dead46,PodSandboxId:7fbb88518b5739fdad0ad3c9ab7d26f2a104dc851ad0c5a93651276faa04d55a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698800771247642609,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rgzt8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d136c6a-e0b2-44c3-a17b-85649d6ff7b7,},Annotations:map[string]string{io.kubernetes.container.hash: c0e462ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f060c7640be39d364fe8967ac8f38f7e607548707a374220ef0feb1305678cf3,PodSandboxId:d5a7315324b17f0871c73c4759bac5ae2592a914739929d59ec9a6545d9acf35,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698800747606688658,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-639310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48958f49a129074eec
3f767ffb1dddd1,},Annotations:map[string]string{io.kubernetes.container.hash: 7ec6a6b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:178b076f51f7e8b659548480b9d8ff724f062bfec5c6ec0c3084b6d182210a51,PodSandboxId:b24c09745f83dc0eb98666502bab147baf943158dfd4f937ac3eff1a6e79f77c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698800747007400537,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-639310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 758e58c461773c5e0d
a7f3fa9c9b2628,},Annotations:map[string]string{io.kubernetes.container.hash: c78fc5b1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa1d8001c088acee185e9dad86cadfffdb1d5d2b62e785ec1ecd9cf0628faa80,PodSandboxId:8ec5d4b8331c34bf93d80dd0902a599768985a0fa2db30d361a1603fbe6958dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698800746803011038,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-639310,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 22af6e3d9158739e028e940aca1196e5,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203adcd67c53b37280b1e4ca576ca64ec2acf717740fea98d8ab311db9f57ed3,PodSandboxId:4b46efeec38d55eccf4d2a8220af4bfeb16484d377e7133b99afc539a4f7659c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698800746611905028,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-639310,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 76224889b8a2b452d7f7b1ab03f60615,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a0ac6843-2684-480a-8583-d8650ff33f17 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:21:30 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:21:30.138896235Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=2834dafd-2c7a-4345-ac20-5b0b183d8a14 name=/runtime.v1.RuntimeService/Version
	Nov 01 01:21:30 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:21:30.138982893Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=2834dafd-2c7a-4345-ac20-5b0b183d8a14 name=/runtime.v1.RuntimeService/Version
	Nov 01 01:21:30 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:21:30.140315662Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=53258ce1-ed52-4b14-a1d6-01936acd038f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:21:30 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:21:30.140769255Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698801690140755390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=53258ce1-ed52-4b14-a1d6-01936acd038f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:21:30 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:21:30.141546024Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7ded45f3-db80-4b43-905e-d21fb67db3df name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:21:30 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:21:30.141615115Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7ded45f3-db80-4b43-905e-d21fb67db3df name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:21:30 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:21:30.141792891Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a85b8e74173dbc34ea106aa829e909bb3fdc9fc0aa01d5f03beec385939e885,PodSandboxId:af1db69584833a352404ac369d09504166c678f9aa4b89facb0dd0607707cc23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698800773059095055,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaba9583-e564-4804-9cd3-2b4de36c85da,},Annotations:map[string]string{io.kubernetes.container.hash: ac50747,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8745454c3ba65b39dcca4ec859db6eec9bde20e8655cef3187db49575282aa10,PodSandboxId:cfe8082e2628799d9efe7b672e81ddcad90a99dd001281bd3e01c9e33fb9b901,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698800772252049052,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kzgzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d59980-f28a-482c-9aa8-8502915417f0,},Annotations:map[string]string{io.kubernetes.container.hash: a7c19628,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22be33464e2b616389f8c1c9fe097420418464330ebba5269746922fb0dead46,PodSandboxId:7fbb88518b5739fdad0ad3c9ab7d26f2a104dc851ad0c5a93651276faa04d55a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698800771247642609,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rgzt8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d136c6a-e0b2-44c3-a17b-85649d6ff7b7,},Annotations:map[string]string{io.kubernetes.container.hash: c0e462ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f060c7640be39d364fe8967ac8f38f7e607548707a374220ef0feb1305678cf3,PodSandboxId:d5a7315324b17f0871c73c4759bac5ae2592a914739929d59ec9a6545d9acf35,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698800747606688658,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-639310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48958f49a129074eec
3f767ffb1dddd1,},Annotations:map[string]string{io.kubernetes.container.hash: 7ec6a6b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:178b076f51f7e8b659548480b9d8ff724f062bfec5c6ec0c3084b6d182210a51,PodSandboxId:b24c09745f83dc0eb98666502bab147baf943158dfd4f937ac3eff1a6e79f77c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698800747007400537,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-639310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 758e58c461773c5e0d
a7f3fa9c9b2628,},Annotations:map[string]string{io.kubernetes.container.hash: c78fc5b1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa1d8001c088acee185e9dad86cadfffdb1d5d2b62e785ec1ecd9cf0628faa80,PodSandboxId:8ec5d4b8331c34bf93d80dd0902a599768985a0fa2db30d361a1603fbe6958dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698800746803011038,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-639310,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 22af6e3d9158739e028e940aca1196e5,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203adcd67c53b37280b1e4ca576ca64ec2acf717740fea98d8ab311db9f57ed3,PodSandboxId:4b46efeec38d55eccf4d2a8220af4bfeb16484d377e7133b99afc539a4f7659c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698800746611905028,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-639310,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 76224889b8a2b452d7f7b1ab03f60615,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7ded45f3-db80-4b43-905e-d21fb67db3df name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:21:30 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:21:30.177029465Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b64a00ff-074b-4f29-af4a-f6393bf12fa0 name=/runtime.v1.RuntimeService/Version
	Nov 01 01:21:30 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:21:30.177119897Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b64a00ff-074b-4f29-af4a-f6393bf12fa0 name=/runtime.v1.RuntimeService/Version
	Nov 01 01:21:30 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:21:30.179246746Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=426e07b9-5326-40a2-98c2-aa865a8b83d3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:21:30 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:21:30.180206308Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698801690180074168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=426e07b9-5326-40a2-98c2-aa865a8b83d3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:21:30 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:21:30.180949250Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fecf172f-78c4-4bf0-a24a-f33886804b73 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:21:30 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:21:30.181023775Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fecf172f-78c4-4bf0-a24a-f33886804b73 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:21:30 default-k8s-diff-port-639310 crio[713]: time="2023-11-01 01:21:30.181331147Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a85b8e74173dbc34ea106aa829e909bb3fdc9fc0aa01d5f03beec385939e885,PodSandboxId:af1db69584833a352404ac369d09504166c678f9aa4b89facb0dd0607707cc23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698800773059095055,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaba9583-e564-4804-9cd3-2b4de36c85da,},Annotations:map[string]string{io.kubernetes.container.hash: ac50747,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8745454c3ba65b39dcca4ec859db6eec9bde20e8655cef3187db49575282aa10,PodSandboxId:cfe8082e2628799d9efe7b672e81ddcad90a99dd001281bd3e01c9e33fb9b901,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698800772252049052,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kzgzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d59980-f28a-482c-9aa8-8502915417f0,},Annotations:map[string]string{io.kubernetes.container.hash: a7c19628,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22be33464e2b616389f8c1c9fe097420418464330ebba5269746922fb0dead46,PodSandboxId:7fbb88518b5739fdad0ad3c9ab7d26f2a104dc851ad0c5a93651276faa04d55a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698800771247642609,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rgzt8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d136c6a-e0b2-44c3-a17b-85649d6ff7b7,},Annotations:map[string]string{io.kubernetes.container.hash: c0e462ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f060c7640be39d364fe8967ac8f38f7e607548707a374220ef0feb1305678cf3,PodSandboxId:d5a7315324b17f0871c73c4759bac5ae2592a914739929d59ec9a6545d9acf35,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698800747606688658,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-639310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48958f49a129074eec
3f767ffb1dddd1,},Annotations:map[string]string{io.kubernetes.container.hash: 7ec6a6b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:178b076f51f7e8b659548480b9d8ff724f062bfec5c6ec0c3084b6d182210a51,PodSandboxId:b24c09745f83dc0eb98666502bab147baf943158dfd4f937ac3eff1a6e79f77c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698800747007400537,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-639310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 758e58c461773c5e0d
a7f3fa9c9b2628,},Annotations:map[string]string{io.kubernetes.container.hash: c78fc5b1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa1d8001c088acee185e9dad86cadfffdb1d5d2b62e785ec1ecd9cf0628faa80,PodSandboxId:8ec5d4b8331c34bf93d80dd0902a599768985a0fa2db30d361a1603fbe6958dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698800746803011038,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-639310,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 22af6e3d9158739e028e940aca1196e5,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203adcd67c53b37280b1e4ca576ca64ec2acf717740fea98d8ab311db9f57ed3,PodSandboxId:4b46efeec38d55eccf4d2a8220af4bfeb16484d377e7133b99afc539a4f7659c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698800746611905028,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-639310,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 76224889b8a2b452d7f7b1ab03f60615,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fecf172f-78c4-4bf0-a24a-f33886804b73 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5a85b8e74173d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   af1db69584833       storage-provisioner
	8745454c3ba65       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   15 minutes ago      Running             kube-proxy                0                   cfe8082e26287       kube-proxy-kzgzn
	22be33464e2b6       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 minutes ago      Running             coredns                   0                   7fbb88518b573       coredns-5dd5756b68-rgzt8
	f060c7640be39       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   15 minutes ago      Running             etcd                      2                   d5a7315324b17       etcd-default-k8s-diff-port-639310
	178b076f51f7e       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   15 minutes ago      Running             kube-apiserver            2                   b24c09745f83d       kube-apiserver-default-k8s-diff-port-639310
	fa1d8001c088a       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   15 minutes ago      Running             kube-controller-manager   2                   8ec5d4b8331c3       kube-controller-manager-default-k8s-diff-port-639310
	203adcd67c53b       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   15 minutes ago      Running             kube-scheduler            2                   4b46efeec38d5       kube-scheduler-default-k8s-diff-port-639310
	
	* 
	* ==> coredns [22be33464e2b616389f8c1c9fe097420418464330ebba5269746922fb0dead46] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	[INFO] Reloading complete
	[INFO] 127.0.0.1:37063 - 58527 "HINFO IN 1214933915492992955.7899154619537145924. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010486748s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-639310
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-639310
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9
	                    minikube.k8s.io/name=default-k8s-diff-port-639310
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_01T01_05_56_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Nov 2023 01:05:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-639310
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Nov 2023 01:21:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Nov 2023 01:16:30 +0000   Wed, 01 Nov 2023 01:05:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Nov 2023 01:16:30 +0000   Wed, 01 Nov 2023 01:05:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Nov 2023 01:16:30 +0000   Wed, 01 Nov 2023 01:05:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Nov 2023 01:16:30 +0000   Wed, 01 Nov 2023 01:06:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.97
	  Hostname:    default-k8s-diff-port-639310
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 51f666f978d14a798a25b310a75e9d1b
	  System UUID:                51f666f9-78d1-4a79-8a25-b310a75e9d1b
	  Boot ID:                    b1b0235a-b85b-46ce-90bc-48cb264be07e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-rgzt8                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-default-k8s-diff-port-639310                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-default-k8s-diff-port-639310             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-639310    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-kzgzn                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-default-k8s-diff-port-639310             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-57f55c9bc5-65ph4                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node default-k8s-diff-port-639310 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node default-k8s-diff-port-639310 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node default-k8s-diff-port-639310 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node default-k8s-diff-port-639310 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node default-k8s-diff-port-639310 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node default-k8s-diff-port-639310 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             15m                kubelet          Node default-k8s-diff-port-639310 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeReady                15m                kubelet          Node default-k8s-diff-port-639310 status is now: NodeReady
	  Normal  RegisteredNode           15m                node-controller  Node default-k8s-diff-port-639310 event: Registered Node default-k8s-diff-port-639310 in Controller
	
	* 
	* ==> dmesg <==
	* [Nov 1 01:00] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.064736] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.600812] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.925421] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.139325] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.404456] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000012] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.614211] systemd-fstab-generator[637]: Ignoring "noauto" for root device
	[  +0.135036] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.171280] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.117222] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.266267] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[Nov 1 01:01] systemd-fstab-generator[911]: Ignoring "noauto" for root device
	[ +20.273945] kauditd_printk_skb: 29 callbacks suppressed
	[Nov 1 01:05] systemd-fstab-generator[3480]: Ignoring "noauto" for root device
	[ +10.817729] systemd-fstab-generator[3799]: Ignoring "noauto" for root device
	[Nov 1 01:06] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.927497] kauditd_printk_skb: 9 callbacks suppressed
	
	* 
	* ==> etcd [f060c7640be39d364fe8967ac8f38f7e607548707a374220ef0feb1305678cf3] <==
	* {"level":"info","ts":"2023-11-01T01:05:50.067694Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c03d6f8665d98ba received MsgVoteResp from c03d6f8665d98ba at term 2"}
	{"level":"info","ts":"2023-11-01T01:05:50.06773Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c03d6f8665d98ba became leader at term 2"}
	{"level":"info","ts":"2023-11-01T01:05:50.067836Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c03d6f8665d98ba elected leader c03d6f8665d98ba at term 2"}
	{"level":"info","ts":"2023-11-01T01:05:50.07204Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T01:05:50.072081Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c03d6f8665d98ba","local-member-attributes":"{Name:default-k8s-diff-port-639310 ClientURLs:[https://192.168.72.97:2379]}","request-path":"/0/members/c03d6f8665d98ba/attributes","cluster-id":"d703df346b154168","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-01T01:05:50.073101Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-01T01:05:50.073533Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-01T01:05:50.073672Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-01T01:05:50.073216Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-01T01:05:50.074483Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-01T01:05:50.075299Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.97:2379"}
	{"level":"info","ts":"2023-11-01T01:05:50.086586Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d703df346b154168","local-member-id":"c03d6f8665d98ba","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T01:05:50.08688Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T01:05:50.08696Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T01:15:50.845284Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":724}
	{"level":"info","ts":"2023-11-01T01:15:50.848645Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":724,"took":"2.821991ms","hash":2702205090}
	{"level":"info","ts":"2023-11-01T01:15:50.848781Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2702205090,"revision":724,"compact-revision":-1}
	{"level":"info","ts":"2023-11-01T01:20:50.85486Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":968}
	{"level":"info","ts":"2023-11-01T01:20:50.857232Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":968,"took":"1.714103ms","hash":2450687846}
	{"level":"info","ts":"2023-11-01T01:20:50.857857Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2450687846,"revision":968,"compact-revision":724}
	{"level":"info","ts":"2023-11-01T01:20:58.894361Z","caller":"traceutil/trace.go:171","msg":"trace[571376811] linearizableReadLoop","detail":"{readStateIndex:1412; appliedIndex:1411; }","duration":"229.91377ms","start":"2023-11-01T01:20:58.664405Z","end":"2023-11-01T01:20:58.894319Z","steps":["trace[571376811] 'read index received'  (duration: 229.681284ms)","trace[571376811] 'applied index is now lower than readState.Index'  (duration: 231.857µs)"],"step_count":2}
	{"level":"info","ts":"2023-11-01T01:20:58.894602Z","caller":"traceutil/trace.go:171","msg":"trace[1411906077] transaction","detail":"{read_only:false; response_revision:1218; number_of_response:1; }","duration":"231.682817ms","start":"2023-11-01T01:20:58.662878Z","end":"2023-11-01T01:20:58.894561Z","steps":["trace[1411906077] 'process raft request'  (duration: 231.243031ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-01T01:20:58.89473Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"230.171177ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-01T01:20:58.894858Z","caller":"traceutil/trace.go:171","msg":"trace[1083463645] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1218; }","duration":"230.463589ms","start":"2023-11-01T01:20:58.664383Z","end":"2023-11-01T01:20:58.894847Z","steps":["trace[1083463645] 'agreement among raft nodes before linearized reading'  (duration: 230.130625ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-01T01:20:59.845418Z","caller":"traceutil/trace.go:171","msg":"trace[454058196] transaction","detail":"{read_only:false; response_revision:1219; number_of_response:1; }","duration":"153.54792ms","start":"2023-11-01T01:20:59.691853Z","end":"2023-11-01T01:20:59.845401Z","steps":["trace[454058196] 'process raft request'  (duration: 89.451609ms)","trace[454058196] 'compare'  (duration: 63.940412ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  01:21:30 up 20 min,  0 users,  load average: 0.70, 0.25, 0.18
	Linux default-k8s-diff-port-639310 5.10.57 #1 SMP Tue Oct 31 22:14:31 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [178b076f51f7e8b659548480b9d8ff724f062bfec5c6ec0c3084b6d182210a51] <==
	* E1101 01:16:53.496114       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1101 01:16:53.496238       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1101 01:17:52.378401       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1101 01:18:52.378279       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1101 01:18:53.495777       1 handler_proxy.go:93] no RequestInfo found in the context
	E1101 01:18:53.495851       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1101 01:18:53.495860       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1101 01:18:53.497239       1 handler_proxy.go:93] no RequestInfo found in the context
	E1101 01:18:53.497327       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1101 01:18:53.497368       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1101 01:19:52.378827       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1101 01:20:52.378645       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1101 01:20:52.501741       1 handler_proxy.go:93] no RequestInfo found in the context
	E1101 01:20:52.501946       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1101 01:20:52.502669       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1101 01:20:53.502880       1 handler_proxy.go:93] no RequestInfo found in the context
	W1101 01:20:53.502883       1 handler_proxy.go:93] no RequestInfo found in the context
	E1101 01:20:53.503209       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1101 01:20:53.503234       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1101 01:20:53.503087       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1101 01:20:53.505324       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [fa1d8001c088acee185e9dad86cadfffdb1d5d2b62e785ec1ecd9cf0628faa80] <==
	* I1101 01:15:38.078708       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:16:07.611353       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:16:08.087890       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:16:37.617407       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:16:38.101808       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:17:07.624298       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:17:08.111634       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1101 01:17:14.547848       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="270.333µs"
	I1101 01:17:26.550828       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="118.778µs"
	E1101 01:17:37.629444       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:17:38.128877       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:18:07.635003       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:18:08.138931       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:18:37.640943       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:18:38.148911       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:19:07.647098       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:19:08.164109       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:19:37.653497       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:19:38.174655       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:20:07.661075       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:20:08.185575       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:20:37.668480       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:20:38.195572       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:21:07.675810       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:21:08.205443       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [8745454c3ba65b39dcca4ec859db6eec9bde20e8655cef3187db49575282aa10] <==
	* I1101 01:06:13.113818       1 server_others.go:69] "Using iptables proxy"
	I1101 01:06:13.161889       1 node.go:141] Successfully retrieved node IP: 192.168.72.97
	I1101 01:06:13.299215       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1101 01:06:13.299345       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 01:06:13.302417       1 server_others.go:152] "Using iptables Proxier"
	I1101 01:06:13.303537       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 01:06:13.307214       1 server.go:846] "Version info" version="v1.28.3"
	I1101 01:06:13.307431       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 01:06:13.311259       1 config.go:188] "Starting service config controller"
	I1101 01:06:13.311596       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 01:06:13.311654       1 config.go:97] "Starting endpoint slice config controller"
	I1101 01:06:13.311672       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 01:06:13.314056       1 config.go:315] "Starting node config controller"
	I1101 01:06:13.314106       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 01:06:13.412086       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1101 01:06:13.412103       1 shared_informer.go:318] Caches are synced for service config
	I1101 01:06:13.415474       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [203adcd67c53b37280b1e4ca576ca64ec2acf717740fea98d8ab311db9f57ed3] <==
	* W1101 01:05:52.572349       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1101 01:05:52.572614       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1101 01:05:52.572635       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1101 01:05:52.572730       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1101 01:05:53.399206       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1101 01:05:53.399258       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1101 01:05:53.456518       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1101 01:05:53.456571       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1101 01:05:53.471411       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1101 01:05:53.471572       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1101 01:05:53.541709       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1101 01:05:53.541766       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1101 01:05:53.550451       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1101 01:05:53.550542       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1101 01:05:53.557388       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1101 01:05:53.557487       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1101 01:05:53.645840       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1101 01:05:53.645886       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1101 01:05:53.764061       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1101 01:05:53.764264       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 01:05:53.899340       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1101 01:05:53.899382       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1101 01:05:53.939919       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1101 01:05:53.939979       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1101 01:05:55.548472       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-11-01 01:00:47 UTC, ends at Wed 2023-11-01 01:21:30 UTC. --
	Nov 01 01:18:56 default-k8s-diff-port-639310 kubelet[3806]: E1101 01:18:56.559107    3806 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 01 01:18:56 default-k8s-diff-port-639310 kubelet[3806]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 01 01:18:56 default-k8s-diff-port-639310 kubelet[3806]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 01 01:18:56 default-k8s-diff-port-639310 kubelet[3806]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 01 01:19:03 default-k8s-diff-port-639310 kubelet[3806]: E1101 01:19:03.526883    3806 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-65ph4" podUID="4683706e-65f6-4845-a5ad-60da8cd20d8e"
	Nov 01 01:19:15 default-k8s-diff-port-639310 kubelet[3806]: E1101 01:19:15.527752    3806 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-65ph4" podUID="4683706e-65f6-4845-a5ad-60da8cd20d8e"
	Nov 01 01:19:27 default-k8s-diff-port-639310 kubelet[3806]: E1101 01:19:27.527229    3806 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-65ph4" podUID="4683706e-65f6-4845-a5ad-60da8cd20d8e"
	Nov 01 01:19:42 default-k8s-diff-port-639310 kubelet[3806]: E1101 01:19:42.527476    3806 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-65ph4" podUID="4683706e-65f6-4845-a5ad-60da8cd20d8e"
	Nov 01 01:19:56 default-k8s-diff-port-639310 kubelet[3806]: E1101 01:19:56.529060    3806 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-65ph4" podUID="4683706e-65f6-4845-a5ad-60da8cd20d8e"
	Nov 01 01:19:56 default-k8s-diff-port-639310 kubelet[3806]: E1101 01:19:56.559753    3806 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 01 01:19:56 default-k8s-diff-port-639310 kubelet[3806]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 01 01:19:56 default-k8s-diff-port-639310 kubelet[3806]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 01 01:19:56 default-k8s-diff-port-639310 kubelet[3806]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 01 01:20:09 default-k8s-diff-port-639310 kubelet[3806]: E1101 01:20:09.527207    3806 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-65ph4" podUID="4683706e-65f6-4845-a5ad-60da8cd20d8e"
	Nov 01 01:20:20 default-k8s-diff-port-639310 kubelet[3806]: E1101 01:20:20.529492    3806 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-65ph4" podUID="4683706e-65f6-4845-a5ad-60da8cd20d8e"
	Nov 01 01:20:33 default-k8s-diff-port-639310 kubelet[3806]: E1101 01:20:33.526653    3806 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-65ph4" podUID="4683706e-65f6-4845-a5ad-60da8cd20d8e"
	Nov 01 01:20:46 default-k8s-diff-port-639310 kubelet[3806]: E1101 01:20:46.528097    3806 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-65ph4" podUID="4683706e-65f6-4845-a5ad-60da8cd20d8e"
	Nov 01 01:20:56 default-k8s-diff-port-639310 kubelet[3806]: E1101 01:20:56.559672    3806 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 01 01:20:56 default-k8s-diff-port-639310 kubelet[3806]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 01 01:20:56 default-k8s-diff-port-639310 kubelet[3806]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 01 01:20:56 default-k8s-diff-port-639310 kubelet[3806]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 01 01:20:56 default-k8s-diff-port-639310 kubelet[3806]: E1101 01:20:56.619011    3806 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Nov 01 01:20:58 default-k8s-diff-port-639310 kubelet[3806]: E1101 01:20:58.528670    3806 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-65ph4" podUID="4683706e-65f6-4845-a5ad-60da8cd20d8e"
	Nov 01 01:21:12 default-k8s-diff-port-639310 kubelet[3806]: E1101 01:21:12.528463    3806 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-65ph4" podUID="4683706e-65f6-4845-a5ad-60da8cd20d8e"
	Nov 01 01:21:23 default-k8s-diff-port-639310 kubelet[3806]: E1101 01:21:23.526823    3806 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-65ph4" podUID="4683706e-65f6-4845-a5ad-60da8cd20d8e"
	
	* 
	* ==> storage-provisioner [5a85b8e74173dbc34ea106aa829e909bb3fdc9fc0aa01d5f03beec385939e885] <==
	* I1101 01:06:13.243855       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 01:06:13.264607       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 01:06:13.264799       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 01:06:13.281818       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 01:06:13.282122       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-639310_81ff36f9-e443-479a-89fd-151df6d8833d!
	I1101 01:06:13.289680       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0dee62d4-05e8-4647-9976-47e7e68b166b", APIVersion:"v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-639310_81ff36f9-e443-479a-89fd-151df6d8833d became leader
	I1101 01:06:13.383184       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-639310_81ff36f9-e443-479a-89fd-151df6d8833d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-639310 -n default-k8s-diff-port-639310
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-639310 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-65ph4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-639310 describe pod metrics-server-57f55c9bc5-65ph4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-639310 describe pod metrics-server-57f55c9bc5-65ph4: exit status 1 (70.016651ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-65ph4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-639310 describe pod metrics-server-57f55c9bc5-65ph4: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (363.86s)
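The assertion behind this failure can be replayed by hand while the profile is still up. A minimal sketch, reusing the context name from this run and the commands the harness itself logs (the dashboard label, namespace and expected echoserver image are taken from the no-preload run recorded further below):

	# pods the dashboard addon is expected to create
	kubectl --context default-k8s-diff-port-639310 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
	# deployment whose image list the test expects to contain registry.k8s.io/echoserver:1.4
	kubectl --context default-k8s-diff-port-639310 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
	# any non-Running pods, as in the post-mortem above
	kubectl --context default-k8s-diff-port-639310 get po -A --field-selector=status.phase!=Running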

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (309.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1101 01:16:14.122187   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/auto-090856/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-008483 -n no-preload-008483
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-11-01 01:21:04.035022754 +0000 UTC m=+5837.599604752
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-008483 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-008483 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.843µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-008483 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-008483 -n no-preload-008483
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-008483 logs -n 25
E1101 01:21:05.552871   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-008483 logs -n 25: (1.426114747s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p flannel-090856 sudo                                 | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |                |                     |                     |
	| start   | -p embed-certs-754132                                  | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:52 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| ssh     | -p flannel-090856 sudo find                            | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |                |                     |                     |
	| ssh     | -p flannel-090856 sudo crio                            | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | config                                                 |                              |         |                |                     |                     |
	| delete  | -p flannel-090856                                      | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	| delete  | -p                                                     | disable-driver-mounts-130996 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | disable-driver-mounts-130996                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:53 UTC |
	|         | default-k8s-diff-port-639310                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-008483             | no-preload-008483            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC | 01 Nov 23 00:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-008483                                   | no-preload-008483            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-754132            | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC | 01 Nov 23 00:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-754132                                  | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-330042        | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC | 01 Nov 23 00:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-330042                              | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-639310  | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:53 UTC | 01 Nov 23 00:53 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:53 UTC |                     |
	|         | default-k8s-diff-port-639310                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-008483                  | no-preload-008483            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-754132                 | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-008483                                   | no-preload-008483            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC | 01 Nov 23 01:06 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| start   | -p embed-certs-754132                                  | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC | 01 Nov 23 01:05 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-330042             | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-330042                              | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC | 01 Nov 23 01:07 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-639310       | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:56 UTC | 01 Nov 23 01:06 UTC |
	|         | default-k8s-diff-port-639310                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| delete  | -p old-k8s-version-330042                              | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:20 UTC | 01 Nov 23 01:20 UTC |
	| start   | -p newest-cni-816754 --memory=2200 --alsologtostderr   | newest-cni-816754            | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:20 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/01 01:20:26
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 01:20:26.901577   64625 out.go:296] Setting OutFile to fd 1 ...
	I1101 01:20:26.901877   64625 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 01:20:26.901887   64625 out.go:309] Setting ErrFile to fd 2...
	I1101 01:20:26.901895   64625 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 01:20:26.902108   64625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7305/.minikube/bin
	I1101 01:20:26.902738   64625 out.go:303] Setting JSON to false
	I1101 01:20:26.903795   64625 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7372,"bootTime":1698794255,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 01:20:26.903867   64625 start.go:138] virtualization: kvm guest
	I1101 01:20:26.906222   64625 out.go:177] * [newest-cni-816754] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1101 01:20:26.907419   64625 out.go:177]   - MINIKUBE_LOCATION=17486
	I1101 01:20:26.907510   64625 notify.go:220] Checking for updates...
	I1101 01:20:26.908569   64625 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 01:20:26.909780   64625 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 01:20:26.911040   64625 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7305/.minikube
	I1101 01:20:26.912350   64625 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 01:20:26.913709   64625 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 01:20:26.915573   64625 config.go:182] Loaded profile config "default-k8s-diff-port-639310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:20:26.915683   64625 config.go:182] Loaded profile config "embed-certs-754132": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:20:26.915774   64625 config.go:182] Loaded profile config "no-preload-008483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:20:26.915865   64625 driver.go:378] Setting default libvirt URI to qemu:///system
	I1101 01:20:26.957402   64625 out.go:177] * Using the kvm2 driver based on user configuration
	I1101 01:20:26.959160   64625 start.go:298] selected driver: kvm2
	I1101 01:20:26.959182   64625 start.go:902] validating driver "kvm2" against <nil>
	I1101 01:20:26.959194   64625 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 01:20:26.959984   64625 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:20:26.960073   64625 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17486-7305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1101 01:20:26.976448   64625 install.go:137] /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1101 01:20:26.976536   64625 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	W1101 01:20:26.976585   64625 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1101 01:20:26.976861   64625 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 01:20:26.976937   64625 cni.go:84] Creating CNI manager for ""
	I1101 01:20:26.976955   64625 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:20:26.976973   64625 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1101 01:20:26.976985   64625 start_flags.go:323] config:
	{Name:newest-cni-816754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-816754 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 01:20:26.977171   64625 iso.go:125] acquiring lock: {Name:mk1f649ca0b7c1ae293cd66cb85f9eeda028b20b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:20:26.979480   64625 out.go:177] * Starting control plane node newest-cni-816754 in cluster newest-cni-816754
	I1101 01:20:26.981114   64625 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 01:20:26.981177   64625 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1101 01:20:26.981212   64625 cache.go:56] Caching tarball of preloaded images
	I1101 01:20:26.981372   64625 preload.go:174] Found /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 01:20:26.981391   64625 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1101 01:20:26.981513   64625 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/config.json ...
	I1101 01:20:26.981539   64625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/config.json: {Name:mk93f245040cb932920ceaccd9b3116731eb7701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:20:26.981715   64625 start.go:365] acquiring machines lock for newest-cni-816754: {Name:mk7aad88408c319111b9be8e59d9593a9e88374b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 01:20:26.981773   64625 start.go:369] acquired machines lock for "newest-cni-816754" in 41.542µs
	I1101 01:20:26.981798   64625 start.go:93] Provisioning new machine with config: &{Name:newest-cni-816754 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-816754 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 01:20:26.981922   64625 start.go:125] createHost starting for "" (driver="kvm2")
	I1101 01:20:26.984486   64625 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1101 01:20:26.984675   64625 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:20:26.984734   64625 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:20:26.999310   64625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40233
	I1101 01:20:26.999922   64625 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:20:27.000559   64625 main.go:141] libmachine: Using API Version  1
	I1101 01:20:27.000633   64625 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:20:27.001131   64625 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:20:27.001331   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetMachineName
	I1101 01:20:27.001486   64625 main.go:141] libmachine: (newest-cni-816754) Calling .DriverName
	I1101 01:20:27.001663   64625 start.go:159] libmachine.API.Create for "newest-cni-816754" (driver="kvm2")
	I1101 01:20:27.001704   64625 client.go:168] LocalClient.Create starting
	I1101 01:20:27.001749   64625 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem
	I1101 01:20:27.001823   64625 main.go:141] libmachine: Decoding PEM data...
	I1101 01:20:27.001848   64625 main.go:141] libmachine: Parsing certificate...
	I1101 01:20:27.001921   64625 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem
	I1101 01:20:27.001951   64625 main.go:141] libmachine: Decoding PEM data...
	I1101 01:20:27.001968   64625 main.go:141] libmachine: Parsing certificate...
	I1101 01:20:27.001996   64625 main.go:141] libmachine: Running pre-create checks...
	I1101 01:20:27.002010   64625 main.go:141] libmachine: (newest-cni-816754) Calling .PreCreateCheck
	I1101 01:20:27.002505   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetConfigRaw
	I1101 01:20:27.003066   64625 main.go:141] libmachine: Creating machine...
	I1101 01:20:27.003087   64625 main.go:141] libmachine: (newest-cni-816754) Calling .Create
	I1101 01:20:27.003248   64625 main.go:141] libmachine: (newest-cni-816754) Creating KVM machine...
	I1101 01:20:27.005057   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found existing default KVM network
	I1101 01:20:27.006772   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:27.006629   64648 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000112c40}
	I1101 01:20:27.012945   64625 main.go:141] libmachine: (newest-cni-816754) DBG | trying to create private KVM network mk-newest-cni-816754 192.168.39.0/24...
	I1101 01:20:27.098635   64625 main.go:141] libmachine: (newest-cni-816754) DBG | private KVM network mk-newest-cni-816754 192.168.39.0/24 created
	I1101 01:20:27.098677   64625 main.go:141] libmachine: (newest-cni-816754) Setting up store path in /home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754 ...
	I1101 01:20:27.098714   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:27.098609   64648 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17486-7305/.minikube
	I1101 01:20:27.098753   64625 main.go:141] libmachine: (newest-cni-816754) Building disk image from file:///home/jenkins/minikube-integration/17486-7305/.minikube/cache/iso/amd64/minikube-v1.32.0-1698773592-17486-amd64.iso
	I1101 01:20:27.098787   64625 main.go:141] libmachine: (newest-cni-816754) Downloading /home/jenkins/minikube-integration/17486-7305/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17486-7305/.minikube/cache/iso/amd64/minikube-v1.32.0-1698773592-17486-amd64.iso...
	I1101 01:20:27.330302   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:27.330064   64648 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754/id_rsa...
	I1101 01:20:27.606617   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:27.606462   64648 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754/newest-cni-816754.rawdisk...
	I1101 01:20:27.606653   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Writing magic tar header
	I1101 01:20:27.606677   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Writing SSH key tar header
	I1101 01:20:27.606784   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:27.606706   64648 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754 ...
	I1101 01:20:27.606843   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754
	I1101 01:20:27.606862   64625 main.go:141] libmachine: (newest-cni-816754) Setting executable bit set on /home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754 (perms=drwx------)
	I1101 01:20:27.606872   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17486-7305/.minikube/machines
	I1101 01:20:27.606888   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17486-7305/.minikube
	I1101 01:20:27.606899   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17486-7305
	I1101 01:20:27.606926   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1101 01:20:27.606938   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Checking permissions on dir: /home/jenkins
	I1101 01:20:27.606955   64625 main.go:141] libmachine: (newest-cni-816754) Setting executable bit set on /home/jenkins/minikube-integration/17486-7305/.minikube/machines (perms=drwxr-xr-x)
	I1101 01:20:27.606966   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Checking permissions on dir: /home
	I1101 01:20:27.606983   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Skipping /home - not owner
	I1101 01:20:27.606996   64625 main.go:141] libmachine: (newest-cni-816754) Setting executable bit set on /home/jenkins/minikube-integration/17486-7305/.minikube (perms=drwxr-xr-x)
	I1101 01:20:27.607004   64625 main.go:141] libmachine: (newest-cni-816754) Setting executable bit set on /home/jenkins/minikube-integration/17486-7305 (perms=drwxrwxr-x)
	I1101 01:20:27.607017   64625 main.go:141] libmachine: (newest-cni-816754) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1101 01:20:27.607032   64625 main.go:141] libmachine: (newest-cni-816754) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1101 01:20:27.607050   64625 main.go:141] libmachine: (newest-cni-816754) Creating domain...
	I1101 01:20:27.608546   64625 main.go:141] libmachine: (newest-cni-816754) define libvirt domain using xml: 
	I1101 01:20:27.608579   64625 main.go:141] libmachine: (newest-cni-816754) <domain type='kvm'>
	I1101 01:20:27.608590   64625 main.go:141] libmachine: (newest-cni-816754)   <name>newest-cni-816754</name>
	I1101 01:20:27.608596   64625 main.go:141] libmachine: (newest-cni-816754)   <memory unit='MiB'>2200</memory>
	I1101 01:20:27.608602   64625 main.go:141] libmachine: (newest-cni-816754)   <vcpu>2</vcpu>
	I1101 01:20:27.608610   64625 main.go:141] libmachine: (newest-cni-816754)   <features>
	I1101 01:20:27.608619   64625 main.go:141] libmachine: (newest-cni-816754)     <acpi/>
	I1101 01:20:27.608632   64625 main.go:141] libmachine: (newest-cni-816754)     <apic/>
	I1101 01:20:27.608644   64625 main.go:141] libmachine: (newest-cni-816754)     <pae/>
	I1101 01:20:27.608655   64625 main.go:141] libmachine: (newest-cni-816754)     
	I1101 01:20:27.608668   64625 main.go:141] libmachine: (newest-cni-816754)   </features>
	I1101 01:20:27.608678   64625 main.go:141] libmachine: (newest-cni-816754)   <cpu mode='host-passthrough'>
	I1101 01:20:27.608705   64625 main.go:141] libmachine: (newest-cni-816754)   
	I1101 01:20:27.608725   64625 main.go:141] libmachine: (newest-cni-816754)   </cpu>
	I1101 01:20:27.608732   64625 main.go:141] libmachine: (newest-cni-816754)   <os>
	I1101 01:20:27.608751   64625 main.go:141] libmachine: (newest-cni-816754)     <type>hvm</type>
	I1101 01:20:27.608760   64625 main.go:141] libmachine: (newest-cni-816754)     <boot dev='cdrom'/>
	I1101 01:20:27.608766   64625 main.go:141] libmachine: (newest-cni-816754)     <boot dev='hd'/>
	I1101 01:20:27.608775   64625 main.go:141] libmachine: (newest-cni-816754)     <bootmenu enable='no'/>
	I1101 01:20:27.608780   64625 main.go:141] libmachine: (newest-cni-816754)   </os>
	I1101 01:20:27.608786   64625 main.go:141] libmachine: (newest-cni-816754)   <devices>
	I1101 01:20:27.608793   64625 main.go:141] libmachine: (newest-cni-816754)     <disk type='file' device='cdrom'>
	I1101 01:20:27.608805   64625 main.go:141] libmachine: (newest-cni-816754)       <source file='/home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754/boot2docker.iso'/>
	I1101 01:20:27.608817   64625 main.go:141] libmachine: (newest-cni-816754)       <target dev='hdc' bus='scsi'/>
	I1101 01:20:27.608828   64625 main.go:141] libmachine: (newest-cni-816754)       <readonly/>
	I1101 01:20:27.608838   64625 main.go:141] libmachine: (newest-cni-816754)     </disk>
	I1101 01:20:27.608865   64625 main.go:141] libmachine: (newest-cni-816754)     <disk type='file' device='disk'>
	I1101 01:20:27.608886   64625 main.go:141] libmachine: (newest-cni-816754)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1101 01:20:27.608904   64625 main.go:141] libmachine: (newest-cni-816754)       <source file='/home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754/newest-cni-816754.rawdisk'/>
	I1101 01:20:27.608918   64625 main.go:141] libmachine: (newest-cni-816754)       <target dev='hda' bus='virtio'/>
	I1101 01:20:27.608932   64625 main.go:141] libmachine: (newest-cni-816754)     </disk>
	I1101 01:20:27.608945   64625 main.go:141] libmachine: (newest-cni-816754)     <interface type='network'>
	I1101 01:20:27.608968   64625 main.go:141] libmachine: (newest-cni-816754)       <source network='mk-newest-cni-816754'/>
	I1101 01:20:27.608981   64625 main.go:141] libmachine: (newest-cni-816754)       <model type='virtio'/>
	I1101 01:20:27.608997   64625 main.go:141] libmachine: (newest-cni-816754)     </interface>
	I1101 01:20:27.609012   64625 main.go:141] libmachine: (newest-cni-816754)     <interface type='network'>
	I1101 01:20:27.609025   64625 main.go:141] libmachine: (newest-cni-816754)       <source network='default'/>
	I1101 01:20:27.609039   64625 main.go:141] libmachine: (newest-cni-816754)       <model type='virtio'/>
	I1101 01:20:27.609051   64625 main.go:141] libmachine: (newest-cni-816754)     </interface>
	I1101 01:20:27.609064   64625 main.go:141] libmachine: (newest-cni-816754)     <serial type='pty'>
	I1101 01:20:27.609073   64625 main.go:141] libmachine: (newest-cni-816754)       <target port='0'/>
	I1101 01:20:27.609081   64625 main.go:141] libmachine: (newest-cni-816754)     </serial>
	I1101 01:20:27.609098   64625 main.go:141] libmachine: (newest-cni-816754)     <console type='pty'>
	I1101 01:20:27.609113   64625 main.go:141] libmachine: (newest-cni-816754)       <target type='serial' port='0'/>
	I1101 01:20:27.609128   64625 main.go:141] libmachine: (newest-cni-816754)     </console>
	I1101 01:20:27.609142   64625 main.go:141] libmachine: (newest-cni-816754)     <rng model='virtio'>
	I1101 01:20:27.609153   64625 main.go:141] libmachine: (newest-cni-816754)       <backend model='random'>/dev/random</backend>
	I1101 01:20:27.609165   64625 main.go:141] libmachine: (newest-cni-816754)     </rng>
	I1101 01:20:27.609180   64625 main.go:141] libmachine: (newest-cni-816754)     
	I1101 01:20:27.609190   64625 main.go:141] libmachine: (newest-cni-816754)     
	I1101 01:20:27.609202   64625 main.go:141] libmachine: (newest-cni-816754)   </devices>
	I1101 01:20:27.609216   64625 main.go:141] libmachine: (newest-cni-816754) </domain>
	I1101 01:20:27.609227   64625 main.go:141] libmachine: (newest-cni-816754) 
	I1101 01:20:27.613657   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:b4:0b:bf in network default
	I1101 01:20:27.614304   64625 main.go:141] libmachine: (newest-cni-816754) Ensuring networks are active...
	I1101 01:20:27.614325   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:27.615098   64625 main.go:141] libmachine: (newest-cni-816754) Ensuring network default is active
	I1101 01:20:27.615425   64625 main.go:141] libmachine: (newest-cni-816754) Ensuring network mk-newest-cni-816754 is active
	I1101 01:20:27.615881   64625 main.go:141] libmachine: (newest-cni-816754) Getting domain xml...
	I1101 01:20:27.616713   64625 main.go:141] libmachine: (newest-cni-816754) Creating domain...
	I1101 01:20:28.945794   64625 main.go:141] libmachine: (newest-cni-816754) Waiting to get IP...
	I1101 01:20:28.946695   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:28.947110   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:28.947197   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:28.947105   64648 retry.go:31] will retry after 218.225741ms: waiting for machine to come up
	I1101 01:20:29.166699   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:29.167318   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:29.167352   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:29.167256   64648 retry.go:31] will retry after 390.036378ms: waiting for machine to come up
	I1101 01:20:29.558855   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:29.559354   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:29.559389   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:29.559285   64648 retry.go:31] will retry after 410.30945ms: waiting for machine to come up
	I1101 01:20:29.970656   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:29.971063   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:29.971101   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:29.971014   64648 retry.go:31] will retry after 545.455542ms: waiting for machine to come up
	I1101 01:20:30.517668   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:30.518337   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:30.518379   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:30.518285   64648 retry.go:31] will retry after 562.086808ms: waiting for machine to come up
	I1101 01:20:31.081578   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:31.082157   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:31.082205   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:31.082083   64648 retry.go:31] will retry after 744.834019ms: waiting for machine to come up
	I1101 01:20:31.829035   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:31.829593   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:31.829623   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:31.829545   64648 retry.go:31] will retry after 1.124156549s: waiting for machine to come up
	I1101 01:20:32.955229   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:32.955754   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:32.955776   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:32.955707   64648 retry.go:31] will retry after 945.262883ms: waiting for machine to come up
	I1101 01:20:33.903162   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:33.903604   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:33.903627   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:33.903574   64648 retry.go:31] will retry after 1.342633534s: waiting for machine to come up
	I1101 01:20:35.247780   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:35.248333   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:35.248370   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:35.248271   64648 retry.go:31] will retry after 1.717433966s: waiting for machine to come up
	I1101 01:20:36.967748   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:36.968301   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:36.968331   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:36.968243   64648 retry.go:31] will retry after 2.125257088s: waiting for machine to come up
	I1101 01:20:39.096241   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:39.096903   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:39.096930   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:39.096845   64648 retry.go:31] will retry after 3.120284679s: waiting for machine to come up
	I1101 01:20:42.218526   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:42.219010   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:42.219035   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:42.218966   64648 retry.go:31] will retry after 3.400004837s: waiting for machine to come up
	I1101 01:20:45.621833   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:45.622314   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find current IP address of domain newest-cni-816754 in network mk-newest-cni-816754
	I1101 01:20:45.622342   64625 main.go:141] libmachine: (newest-cni-816754) DBG | I1101 01:20:45.622255   64648 retry.go:31] will retry after 4.340884931s: waiting for machine to come up
	I1101 01:20:49.966397   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:49.966885   64625 main.go:141] libmachine: (newest-cni-816754) Found IP for machine: 192.168.39.148
	I1101 01:20:49.966944   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has current primary IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:49.966966   64625 main.go:141] libmachine: (newest-cni-816754) Reserving static IP address...
	I1101 01:20:49.967354   64625 main.go:141] libmachine: (newest-cni-816754) DBG | unable to find host DHCP lease matching {name: "newest-cni-816754", mac: "52:54:00:e9:10:53", ip: "192.168.39.148"} in network mk-newest-cni-816754
	I1101 01:20:50.049507   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Getting to WaitForSSH function...
	I1101 01:20:50.049555   64625 main.go:141] libmachine: (newest-cni-816754) Reserved static IP address: 192.168.39.148
	I1101 01:20:50.049575   64625 main.go:141] libmachine: (newest-cni-816754) Waiting for SSH to be available...
	I1101 01:20:50.052593   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.053018   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:50.053066   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.053156   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Using SSH client type: external
	I1101 01:20:50.053178   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754/id_rsa (-rw-------)
	I1101 01:20:50.053215   64625 main.go:141] libmachine: (newest-cni-816754) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.148 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 01:20:50.053237   64625 main.go:141] libmachine: (newest-cni-816754) DBG | About to run SSH command:
	I1101 01:20:50.053250   64625 main.go:141] libmachine: (newest-cni-816754) DBG | exit 0
	I1101 01:20:50.143882   64625 main.go:141] libmachine: (newest-cni-816754) DBG | SSH cmd err, output: <nil>: 
	I1101 01:20:50.144170   64625 main.go:141] libmachine: (newest-cni-816754) KVM machine creation complete!
	I1101 01:20:50.144668   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetConfigRaw
	I1101 01:20:50.145249   64625 main.go:141] libmachine: (newest-cni-816754) Calling .DriverName
	I1101 01:20:50.145481   64625 main.go:141] libmachine: (newest-cni-816754) Calling .DriverName
	I1101 01:20:50.145669   64625 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1101 01:20:50.145685   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetState
	I1101 01:20:50.147056   64625 main.go:141] libmachine: Detecting operating system of created instance...
	I1101 01:20:50.147070   64625 main.go:141] libmachine: Waiting for SSH to be available...
	I1101 01:20:50.147077   64625 main.go:141] libmachine: Getting to WaitForSSH function...
	I1101 01:20:50.147083   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:20:50.149699   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.150244   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:50.150269   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.150408   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHPort
	I1101 01:20:50.150591   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:50.150729   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:50.150859   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHUsername
	I1101 01:20:50.151028   64625 main.go:141] libmachine: Using SSH client type: native
	I1101 01:20:50.151445   64625 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I1101 01:20:50.151466   64625 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1101 01:20:50.267248   64625 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 01:20:50.267271   64625 main.go:141] libmachine: Detecting the provisioner...
	I1101 01:20:50.267280   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:20:50.270067   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.270474   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:50.270509   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.270587   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHPort
	I1101 01:20:50.270746   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:50.270937   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:50.271089   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHUsername
	I1101 01:20:50.271265   64625 main.go:141] libmachine: Using SSH client type: native
	I1101 01:20:50.271607   64625 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I1101 01:20:50.271624   64625 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1101 01:20:50.388826   64625 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g0cee705-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1101 01:20:50.388908   64625 main.go:141] libmachine: found compatible host: buildroot
	I1101 01:20:50.388923   64625 main.go:141] libmachine: Provisioning with buildroot...
	I1101 01:20:50.388932   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetMachineName
	I1101 01:20:50.389214   64625 buildroot.go:166] provisioning hostname "newest-cni-816754"
	I1101 01:20:50.389241   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetMachineName
	I1101 01:20:50.389409   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:20:50.392105   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.392490   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:50.392522   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.392627   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHPort
	I1101 01:20:50.392797   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:50.392983   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:50.393154   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHUsername
	I1101 01:20:50.393340   64625 main.go:141] libmachine: Using SSH client type: native
	I1101 01:20:50.393753   64625 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I1101 01:20:50.393771   64625 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-816754 && echo "newest-cni-816754" | sudo tee /etc/hostname
	I1101 01:20:50.524496   64625 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-816754
	
	I1101 01:20:50.524530   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:20:50.527592   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.528072   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:50.528121   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.528367   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHPort
	I1101 01:20:50.528602   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:50.528794   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:50.529017   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHUsername
	I1101 01:20:50.529224   64625 main.go:141] libmachine: Using SSH client type: native
	I1101 01:20:50.529620   64625 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I1101 01:20:50.529646   64625 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-816754' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-816754/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-816754' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 01:20:50.657526   64625 main.go:141] libmachine: SSH cmd err, output: <nil>: 
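For reference, a minimal shell sketch (not part of the recorded run) of how the hostname provisioning above could be confirmed from inside the guest; the hostname and file path come from the log, the commands themselves are an assumption:
	# Check the hostname set by the provisioner and the /etc/hosts entry it added.
	hostname                                  # expected: newest-cni-816754
	grep -n 'newest-cni-816754' /etc/hosts    # expected: a 127.0.1.1 mapping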
	I1101 01:20:50.657563   64625 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1101 01:20:50.657588   64625 buildroot.go:174] setting up certificates
	I1101 01:20:50.657599   64625 provision.go:83] configureAuth start
	I1101 01:20:50.657618   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetMachineName
	I1101 01:20:50.657946   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetIP
	I1101 01:20:50.660675   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.660941   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:50.660970   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.661118   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:20:50.663458   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.663801   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:50.663833   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:50.664020   64625 provision.go:138] copyHostCerts
	I1101 01:20:50.664082   64625 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem, removing ...
	I1101 01:20:50.664104   64625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 01:20:50.664183   64625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1101 01:20:50.664312   64625 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem, removing ...
	I1101 01:20:50.664323   64625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 01:20:50.664359   64625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1101 01:20:50.664466   64625 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem, removing ...
	I1101 01:20:50.664480   64625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 01:20:50.664525   64625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1101 01:20:50.664577   64625 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.newest-cni-816754 san=[192.168.39.148 192.168.39.148 localhost 127.0.0.1 minikube newest-cni-816754]
	I1101 01:20:51.005619   64625 provision.go:172] copyRemoteCerts
	I1101 01:20:51.005678   64625 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 01:20:51.005708   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:20:51.008884   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.009323   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:51.009359   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.009521   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHPort
	I1101 01:20:51.009749   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:51.009919   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHUsername
	I1101 01:20:51.010066   64625 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754/id_rsa Username:docker}
	I1101 01:20:51.101132   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 01:20:51.125444   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 01:20:51.148653   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1101 01:20:51.172340   64625 provision.go:86] duration metric: configureAuth took 514.718541ms
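As an aside, a hedged sketch of inspecting the SANs on the server certificate copied above; the /etc/docker/server.pem path is taken from the log, while the openssl invocation is a generic check rather than something minikube runs here:
	# Show the Subject Alternative Names baked into the generated server cert.
	sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'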
	I1101 01:20:51.172363   64625 buildroot.go:189] setting minikube options for container-runtime
	I1101 01:20:51.172712   64625 config.go:182] Loaded profile config "newest-cni-816754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:20:51.172819   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:20:51.176208   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.176684   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:51.176721   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.176935   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHPort
	I1101 01:20:51.177176   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:51.177359   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:51.177516   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHUsername
	I1101 01:20:51.177725   64625 main.go:141] libmachine: Using SSH client type: native
	I1101 01:20:51.178105   64625 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I1101 01:20:51.178127   64625 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 01:20:51.483523   64625 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 01:20:51.483557   64625 main.go:141] libmachine: Checking connection to Docker...
	I1101 01:20:51.483589   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetURL
	I1101 01:20:51.484996   64625 main.go:141] libmachine: (newest-cni-816754) DBG | Using libvirt version 6000000
	I1101 01:20:51.487079   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.487445   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:51.487479   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.487667   64625 main.go:141] libmachine: Docker is up and running!
	I1101 01:20:51.487682   64625 main.go:141] libmachine: Reticulating splines...
	I1101 01:20:51.487690   64625 client.go:171] LocalClient.Create took 24.485974012s
	I1101 01:20:51.487718   64625 start.go:167] duration metric: libmachine.API.Create for "newest-cni-816754" took 24.486056341s
	I1101 01:20:51.487735   64625 start.go:300] post-start starting for "newest-cni-816754" (driver="kvm2")
	I1101 01:20:51.487751   64625 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 01:20:51.487775   64625 main.go:141] libmachine: (newest-cni-816754) Calling .DriverName
	I1101 01:20:51.488081   64625 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 01:20:51.488105   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:20:51.490270   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.490622   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:51.490644   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.490743   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHPort
	I1101 01:20:51.490946   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:51.491112   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHUsername
	I1101 01:20:51.491250   64625 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754/id_rsa Username:docker}
	I1101 01:20:51.577564   64625 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 01:20:51.582421   64625 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 01:20:51.582451   64625 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1101 01:20:51.582522   64625 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1101 01:20:51.582624   64625 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> 145042.pem in /etc/ssl/certs
	I1101 01:20:51.582715   64625 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 01:20:51.591399   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:20:51.614218   64625 start.go:303] post-start completed in 126.467274ms
	I1101 01:20:51.614257   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetConfigRaw
	I1101 01:20:51.614794   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetIP
	I1101 01:20:51.617335   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.617843   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:51.617874   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.618197   64625 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/config.json ...
	I1101 01:20:51.618407   64625 start.go:128] duration metric: createHost completed in 24.636474986s
	I1101 01:20:51.618431   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:20:51.621008   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.621378   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:51.621410   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.621543   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHPort
	I1101 01:20:51.621766   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:51.621964   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:51.622142   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHUsername
	I1101 01:20:51.622347   64625 main.go:141] libmachine: Using SSH client type: native
	I1101 01:20:51.622677   64625 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I1101 01:20:51.622691   64625 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1101 01:20:51.740752   64625 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698801651.712516909
	
	I1101 01:20:51.740784   64625 fix.go:206] guest clock: 1698801651.712516909
	I1101 01:20:51.740793   64625 fix.go:219] Guest: 2023-11-01 01:20:51.712516909 +0000 UTC Remote: 2023-11-01 01:20:51.618418585 +0000 UTC m=+24.769563112 (delta=94.098324ms)
	I1101 01:20:51.740821   64625 fix.go:190] guest clock delta is within tolerance: 94.098324ms
	I1101 01:20:51.740830   64625 start.go:83] releasing machines lock for "newest-cni-816754", held for 24.759043949s
	I1101 01:20:51.740859   64625 main.go:141] libmachine: (newest-cni-816754) Calling .DriverName
	I1101 01:20:51.741149   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetIP
	I1101 01:20:51.743857   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.744261   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:51.744295   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.744431   64625 main.go:141] libmachine: (newest-cni-816754) Calling .DriverName
	I1101 01:20:51.744972   64625 main.go:141] libmachine: (newest-cni-816754) Calling .DriverName
	I1101 01:20:51.745171   64625 main.go:141] libmachine: (newest-cni-816754) Calling .DriverName
	I1101 01:20:51.745262   64625 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 01:20:51.745342   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:20:51.745424   64625 ssh_runner.go:195] Run: cat /version.json
	I1101 01:20:51.745451   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHHostname
	I1101 01:20:51.748196   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.748256   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.748573   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:51.748610   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.748646   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:51.748669   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:51.748736   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHPort
	I1101 01:20:51.748834   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHPort
	I1101 01:20:51.748922   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:51.748981   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHKeyPath
	I1101 01:20:51.749071   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHUsername
	I1101 01:20:51.749139   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetSSHUsername
	I1101 01:20:51.749206   64625 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754/id_rsa Username:docker}
	I1101 01:20:51.749252   64625 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/newest-cni-816754/id_rsa Username:docker}
	I1101 01:20:51.867866   64625 ssh_runner.go:195] Run: systemctl --version
	I1101 01:20:51.873719   64625 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 01:20:52.035278   64625 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 01:20:52.041189   64625 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 01:20:52.041272   64625 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 01:20:52.056896   64625 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 01:20:52.056923   64625 start.go:472] detecting cgroup driver to use...
	I1101 01:20:52.056988   64625 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 01:20:52.070654   64625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 01:20:52.083147   64625 docker.go:204] disabling cri-docker service (if available) ...
	I1101 01:20:52.083220   64625 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 01:20:52.095798   64625 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 01:20:52.108444   64625 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 01:20:52.224741   64625 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 01:20:52.347981   64625 docker.go:220] disabling docker service ...
	I1101 01:20:52.348055   64625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 01:20:52.361496   64625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 01:20:52.373422   64625 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 01:20:52.491564   64625 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 01:20:52.602648   64625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 01:20:52.614781   64625 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 01:20:52.632225   64625 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 01:20:52.632289   64625 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:20:52.642604   64625 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 01:20:52.642661   64625 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:20:52.652069   64625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:20:52.661998   64625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:20:52.671552   64625 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 01:20:52.682867   64625 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 01:20:52.691926   64625 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 01:20:52.692008   64625 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 01:20:52.704079   64625 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 01:20:52.713491   64625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 01:20:52.839278   64625 ssh_runner.go:195] Run: sudo systemctl restart crio
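For context, a minimal sketch (not performed by the test) that would spot-check the CRI-O settings written above once the restart completes; file paths and keys are taken from the preceding commands:
	# Confirm the pause image, cgroup driver and conmon cgroup left behind by the sed edits.
	grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
	# Confirm the insecure-registry option injected via the sysconfig drop-in.
	cat /etc/sysconfig/crio.minikube
	sudo systemctl is-active crio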
	I1101 01:20:53.013907   64625 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 01:20:53.013976   64625 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 01:20:53.019384   64625 start.go:540] Will wait 60s for crictl version
	I1101 01:20:53.019445   64625 ssh_runner.go:195] Run: which crictl
	I1101 01:20:53.023197   64625 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 01:20:53.061965   64625 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1101 01:20:53.062085   64625 ssh_runner.go:195] Run: crio --version
	I1101 01:20:53.108002   64625 ssh_runner.go:195] Run: crio --version
	I1101 01:20:53.158652   64625 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1101 01:20:53.160356   64625 main.go:141] libmachine: (newest-cni-816754) Calling .GetIP
	I1101 01:20:53.163312   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:53.163744   64625 main.go:141] libmachine: (newest-cni-816754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:10:53", ip: ""} in network mk-newest-cni-816754: {Iface:virbr1 ExpiryTime:2023-11-01 02:20:42 +0000 UTC Type:0 Mac:52:54:00:e9:10:53 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:newest-cni-816754 Clientid:01:52:54:00:e9:10:53}
	I1101 01:20:53.163790   64625 main.go:141] libmachine: (newest-cni-816754) DBG | domain newest-cni-816754 has defined IP address 192.168.39.148 and MAC address 52:54:00:e9:10:53 in network mk-newest-cni-816754
	I1101 01:20:53.164054   64625 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1101 01:20:53.168313   64625 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:20:53.181782   64625 localpath.go:92] copying /home/jenkins/minikube-integration/17486-7305/.minikube/client.crt -> /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/client.crt
	I1101 01:20:53.181942   64625 localpath.go:117] copying /home/jenkins/minikube-integration/17486-7305/.minikube/client.key -> /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/client.key
	I1101 01:20:53.184256   64625 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1101 01:20:53.185935   64625 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 01:20:53.186019   64625 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:20:53.221394   64625 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1101 01:20:53.221464   64625 ssh_runner.go:195] Run: which lz4
	I1101 01:20:53.225430   64625 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 01:20:53.229572   64625 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 01:20:53.229619   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1101 01:20:55.078513   64625 crio.go:444] Took 1.853113 seconds to copy over tarball
	I1101 01:20:55.078590   64625 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 01:20:58.039163   64625 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.960551837s)
	I1101 01:20:58.039190   64625 crio.go:451] Took 2.960647 seconds to extract the tarball
	I1101 01:20:58.039201   64625 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 01:20:58.082032   64625 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:20:58.154466   64625 crio.go:496] all images are preloaded for cri-o runtime.
	I1101 01:20:58.154490   64625 cache_images.go:84] Images are preloaded, skipping loading
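A small, hypothetical follow-up check (not part of the run) for the preload result above, using crictl as the log already does:
	# List a few of the images the preload tarball is expected to provide.
	sudo crictl images | grep -E 'kube-apiserver|kube-proxy|coredns|pause'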
	I1101 01:20:58.154555   64625 ssh_runner.go:195] Run: crio config
	I1101 01:20:58.229389   64625 cni.go:84] Creating CNI manager for ""
	I1101 01:20:58.229423   64625 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:20:58.229447   64625 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I1101 01:20:58.229485   64625 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.148 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-816754 NodeName:newest-cni-816754 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.148"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.39.148 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 01:20:58.229684   64625 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.148
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-816754"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.148
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.148"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 01:20:58.229811   64625 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-816754 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.148
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:newest-cni-816754 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1101 01:20:58.229888   64625 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 01:20:58.242338   64625 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 01:20:58.242413   64625 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 01:20:58.252625   64625 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (414 bytes)
	I1101 01:20:58.271959   64625 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 01:20:58.290584   64625 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
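For reference, a minimal sketch (not from the run) showing where the kubelet unit and drop-in copied above can be inspected once daemon-reload has run:
	# Print the effective kubelet unit, including the 10-kubeadm.conf drop-in.
	systemctl cat kubelet
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf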
	I1101 01:20:58.309005   64625 ssh_runner.go:195] Run: grep 192.168.39.148	control-plane.minikube.internal$ /etc/hosts
	I1101 01:20:58.313280   64625 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.148	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:20:58.325979   64625 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754 for IP: 192.168.39.148
	I1101 01:20:58.326024   64625 certs.go:190] acquiring lock for shared ca certs: {Name:mk6185b3938c4b51e80f57b7c81926c81b632b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:20:58.326204   64625 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key
	I1101 01:20:58.326246   64625 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key
	I1101 01:20:58.326329   64625 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/client.key
	I1101 01:20:58.326352   64625 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.key.b8daa033
	I1101 01:20:58.326362   64625 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.crt.b8daa033 with IP's: [192.168.39.148 10.96.0.1 127.0.0.1 10.0.0.1]
	I1101 01:20:58.427110   64625 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.crt.b8daa033 ...
	I1101 01:20:58.427140   64625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.crt.b8daa033: {Name:mk3f8c141290c3a65392487e79efcc8078b29342 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:20:58.427342   64625 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.key.b8daa033 ...
	I1101 01:20:58.427358   64625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.key.b8daa033: {Name:mk51785e712809e4c053079f222fcaf26d1cb6b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:20:58.427483   64625 certs.go:337] copying /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.crt.b8daa033 -> /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.crt
	I1101 01:20:58.427575   64625 certs.go:341] copying /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.key.b8daa033 -> /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.key
	I1101 01:20:58.427646   64625 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/proxy-client.key
	I1101 01:20:58.427668   64625 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/proxy-client.crt with IP's: []
	I1101 01:20:58.706887   64625 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/proxy-client.crt ...
	I1101 01:20:58.706917   64625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/proxy-client.crt: {Name:mkba323c47c990603b5078f2d8326413583ed649 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:20:58.707094   64625 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/proxy-client.key ...
	I1101 01:20:58.707115   64625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/proxy-client.key: {Name:mk640a807fa21c954ca16b3fd0849059bca2a284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:20:58.707364   64625 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem (1338 bytes)
	W1101 01:20:58.707404   64625 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504_empty.pem, impossibly tiny 0 bytes
	I1101 01:20:58.707415   64625 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 01:20:58.707435   64625 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem (1078 bytes)
	I1101 01:20:58.707464   64625 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem (1123 bytes)
	I1101 01:20:58.707485   64625 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem (1675 bytes)
	I1101 01:20:58.707533   64625 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:20:58.708143   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 01:20:58.734111   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 01:20:58.762510   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 01:20:58.788616   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/newest-cni-816754/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 01:20:58.813676   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 01:20:58.839628   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 01:20:58.865730   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 01:20:58.892226   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 01:20:58.917779   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem --> /usr/share/ca-certificates/14504.pem (1338 bytes)
	I1101 01:20:58.942571   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /usr/share/ca-certificates/145042.pem (1708 bytes)
	I1101 01:20:58.965958   64625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 01:20:58.990159   64625 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 01:20:59.008835   64625 ssh_runner.go:195] Run: openssl version
	I1101 01:20:59.015199   64625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14504.pem && ln -fs /usr/share/ca-certificates/14504.pem /etc/ssl/certs/14504.pem"
	I1101 01:20:59.025730   64625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14504.pem
	I1101 01:20:59.030514   64625 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 01:20:59.030585   64625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14504.pem
	I1101 01:20:59.036535   64625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14504.pem /etc/ssl/certs/51391683.0"
	I1101 01:20:59.047853   64625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145042.pem && ln -fs /usr/share/ca-certificates/145042.pem /etc/ssl/certs/145042.pem"
	I1101 01:20:59.058620   64625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145042.pem
	I1101 01:20:59.063296   64625 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 01:20:59.063369   64625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145042.pem
	I1101 01:20:59.069054   64625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145042.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 01:20:59.078653   64625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 01:20:59.089081   64625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:20:59.094030   64625 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:20:59.094097   64625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:20:59.099890   64625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
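The symlink names used above are OpenSSL subject hashes; a hedged sketch (not executed by the test) that reproduces the mapping for the minikube CA:
	# The hash printed here is the basename of the /etc/ssl/certs/<hash>.0 symlink; expected b5213941.
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	ls -l /etc/ssl/certs/b5213941.0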
	I1101 01:20:59.110687   64625 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 01:20:59.115284   64625 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1101 01:20:59.115337   64625 kubeadm.go:404] StartCluster: {Name:newest-cni-816754 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.3 ClusterName:newest-cni-816754 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.148 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/mini
kube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 01:20:59.115439   64625 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 01:20:59.115505   64625 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:20:59.159819   64625 cri.go:89] found id: ""
	I1101 01:20:59.159974   64625 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 01:20:59.169450   64625 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
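As a side note, a hedged sketch of manually validating the file just copied into place; it assumes the bundled kubeadm (v1.26+) supports the 'config validate' subcommand, which minikube itself does not invoke here:
	# Validate the rendered kubeadm configuration with the bundled binary.
	sudo /var/lib/minikube/binaries/v1.28.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml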
	I1101 01:20:59.178882   64625 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:20:59.188155   64625 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:20:59.188209   64625 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1101 01:20:59.590963   64625 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-11-01 01:01:08 UTC, ends at Wed 2023-11-01 01:21:05 UTC. --
	Nov 01 01:21:04 no-preload-008483 crio[709]: time="2023-11-01 01:21:04.894629622Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698801664894617264,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=a3df3b32-d939-4172-8116-d20e315b6106 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:21:04 no-preload-008483 crio[709]: time="2023-11-01 01:21:04.895513380Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2f660b6b-1953-4adc-ad1f-0f401cfa0bbb name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:21:04 no-preload-008483 crio[709]: time="2023-11-01 01:21:04.895563666Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2f660b6b-1953-4adc-ad1f-0f401cfa0bbb name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:21:04 no-preload-008483 crio[709]: time="2023-11-01 01:21:04.895763085Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e91e26d41e22a636f3966fdb1dd6db999eae4ea6cff3e1290036854c8960f051,PodSandboxId:195d8157304c1005ac61e4f188e7c5240de832d9e80aff752fbf253770b0622a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1698800811030815361,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 909163da-9021-4cee-9a72-1bc9b6ae9390,},Annotations:map[string]string{io.kubernetes.container.hash: 1e44b7d8,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edd94ede27bbbe8cfe0647252d6bed169e64b894a76d5a29893d784dc05f519b,PodSandboxId:1e577e533c6773fe74f90f9960a1e296e7b3d9f2168345a6deecf8dbe94cb97c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1698800811087201369,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4cx5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57c1e87a-aa14-440d-9001-a6ba2ab7c8c6,},Annotations:map[string]string{io.kubernetes.container.hash: 510a8192,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73703bcd827a65c00d49d0b850c3eae382a733d0d82a35a7b6f0540825dcf58,PodSandboxId:b3d442321510a7263cd825e67380d31225427312783addaa6b0e07c26484866d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1698800810274125365,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-m8v7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 351a9458-075b-40d1-96d1-86a450a99251,},Annotations:map[string]string{io.kubernetes.container.hash: 82d873be,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1ba3a1083dd2dcb1278523bde1f5387fb968eeba4196562c8bf480c69743a4a,PodSandboxId:3c971c491c8b9e730b6fd26723ec0ca29ef412e4df345a6cbca3317e6bdb84b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1698800787679889858,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-008483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8aa05c6e537fd3a0f101e32fb442ce36,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ae2965afd64df8be7fbc531d17512e8c69ea84d779fdb1bb8dda8a305cbc0ff,PodSandboxId:8f18c30c727e68b06fec8778b482096e772177c5747ac86ff5da1828206108ca,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1698800787610776578,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-008483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3605feb9b1e84ca198f01f1457eb52,},Annotations:map
[string]string{io.kubernetes.container.hash: ce0a95cd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b382513a898be97c48e1ae6d9ba0083e519d059d2e5161e8d91c119e828b9535,PodSandboxId:7512f2f28d7dd29242b39028586b60a61c9522a2e810f28949a5174bb67230a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1698800787440816689,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-008483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e5044dcdf76b056f4fa816fd
0dda7c1,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4de02e7339a911abc7905e3d5b90216f4a37571e0ffcb1411f51374a244ef3fe,PodSandboxId:7b470162184f3df0fb98378ca579fceaaf24b754964960b2a9ff1d127612a437,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1698800787366164936,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-008483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ebfcbba23e72624e12f49fd78f84e46,},A
nnotations:map[string]string{io.kubernetes.container.hash: 7d6e5ab1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2f660b6b-1953-4adc-ad1f-0f401cfa0bbb name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:21:04 no-preload-008483 crio[709]: time="2023-11-01 01:21:04.941623335Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8263ff85-2932-49d5-85ff-64e5c9ed8591 name=/runtime.v1.RuntimeService/Version
	Nov 01 01:21:04 no-preload-008483 crio[709]: time="2023-11-01 01:21:04.941709111Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8263ff85-2932-49d5-85ff-64e5c9ed8591 name=/runtime.v1.RuntimeService/Version
	Nov 01 01:21:04 no-preload-008483 crio[709]: time="2023-11-01 01:21:04.943390802Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=1d0fcc4d-ce69-4689-adb0-712750e9804b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:21:04 no-preload-008483 crio[709]: time="2023-11-01 01:21:04.943897123Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698801664943879416,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=1d0fcc4d-ce69-4689-adb0-712750e9804b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:21:04 no-preload-008483 crio[709]: time="2023-11-01 01:21:04.945116023Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=655aa6a8-25bc-4f29-a6eb-b1dd59350658 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:21:04 no-preload-008483 crio[709]: time="2023-11-01 01:21:04.945240440Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=655aa6a8-25bc-4f29-a6eb-b1dd59350658 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:21:04 no-preload-008483 crio[709]: time="2023-11-01 01:21:04.947325229Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e91e26d41e22a636f3966fdb1dd6db999eae4ea6cff3e1290036854c8960f051,PodSandboxId:195d8157304c1005ac61e4f188e7c5240de832d9e80aff752fbf253770b0622a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1698800811030815361,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 909163da-9021-4cee-9a72-1bc9b6ae9390,},Annotations:map[string]string{io.kubernetes.container.hash: 1e44b7d8,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edd94ede27bbbe8cfe0647252d6bed169e64b894a76d5a29893d784dc05f519b,PodSandboxId:1e577e533c6773fe74f90f9960a1e296e7b3d9f2168345a6deecf8dbe94cb97c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1698800811087201369,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4cx5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57c1e87a-aa14-440d-9001-a6ba2ab7c8c6,},Annotations:map[string]string{io.kubernetes.container.hash: 510a8192,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73703bcd827a65c00d49d0b850c3eae382a733d0d82a35a7b6f0540825dcf58,PodSandboxId:b3d442321510a7263cd825e67380d31225427312783addaa6b0e07c26484866d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1698800810274125365,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-m8v7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 351a9458-075b-40d1-96d1-86a450a99251,},Annotations:map[string]string{io.kubernetes.container.hash: 82d873be,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1ba3a1083dd2dcb1278523bde1f5387fb968eeba4196562c8bf480c69743a4a,PodSandboxId:3c971c491c8b9e730b6fd26723ec0ca29ef412e4df345a6cbca3317e6bdb84b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1698800787679889858,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-008483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8aa05c6e537fd3a0f101e32fb442ce36,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ae2965afd64df8be7fbc531d17512e8c69ea84d779fdb1bb8dda8a305cbc0ff,PodSandboxId:8f18c30c727e68b06fec8778b482096e772177c5747ac86ff5da1828206108ca,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1698800787610776578,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-008483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3605feb9b1e84ca198f01f1457eb52,},Annotations:map
[string]string{io.kubernetes.container.hash: ce0a95cd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b382513a898be97c48e1ae6d9ba0083e519d059d2e5161e8d91c119e828b9535,PodSandboxId:7512f2f28d7dd29242b39028586b60a61c9522a2e810f28949a5174bb67230a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1698800787440816689,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-008483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e5044dcdf76b056f4fa816fd
0dda7c1,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4de02e7339a911abc7905e3d5b90216f4a37571e0ffcb1411f51374a244ef3fe,PodSandboxId:7b470162184f3df0fb98378ca579fceaaf24b754964960b2a9ff1d127612a437,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1698800787366164936,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-008483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ebfcbba23e72624e12f49fd78f84e46,},A
nnotations:map[string]string{io.kubernetes.container.hash: 7d6e5ab1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=655aa6a8-25bc-4f29-a6eb-b1dd59350658 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:21:05 no-preload-008483 crio[709]: time="2023-11-01 01:21:04.999819036Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=9908c1e1-e761-4c47-8dcd-92d945d897e2 name=/runtime.v1.RuntimeService/Version
	Nov 01 01:21:05 no-preload-008483 crio[709]: time="2023-11-01 01:21:04.999901636Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9908c1e1-e761-4c47-8dcd-92d945d897e2 name=/runtime.v1.RuntimeService/Version
	Nov 01 01:21:05 no-preload-008483 crio[709]: time="2023-11-01 01:21:05.003600319Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d469eec1-c05f-4936-954c-955ab0cc3a6c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:21:05 no-preload-008483 crio[709]: time="2023-11-01 01:21:05.004264863Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698801665004241904,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=d469eec1-c05f-4936-954c-955ab0cc3a6c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:21:05 no-preload-008483 crio[709]: time="2023-11-01 01:21:05.005484913Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a4b04660-1dd9-480e-9ac0-87e17f7e76ca name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:21:05 no-preload-008483 crio[709]: time="2023-11-01 01:21:05.005703460Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a4b04660-1dd9-480e-9ac0-87e17f7e76ca name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:21:05 no-preload-008483 crio[709]: time="2023-11-01 01:21:05.006105985Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e91e26d41e22a636f3966fdb1dd6db999eae4ea6cff3e1290036854c8960f051,PodSandboxId:195d8157304c1005ac61e4f188e7c5240de832d9e80aff752fbf253770b0622a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1698800811030815361,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 909163da-9021-4cee-9a72-1bc9b6ae9390,},Annotations:map[string]string{io.kubernetes.container.hash: 1e44b7d8,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edd94ede27bbbe8cfe0647252d6bed169e64b894a76d5a29893d784dc05f519b,PodSandboxId:1e577e533c6773fe74f90f9960a1e296e7b3d9f2168345a6deecf8dbe94cb97c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1698800811087201369,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4cx5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57c1e87a-aa14-440d-9001-a6ba2ab7c8c6,},Annotations:map[string]string{io.kubernetes.container.hash: 510a8192,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73703bcd827a65c00d49d0b850c3eae382a733d0d82a35a7b6f0540825dcf58,PodSandboxId:b3d442321510a7263cd825e67380d31225427312783addaa6b0e07c26484866d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1698800810274125365,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-m8v7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 351a9458-075b-40d1-96d1-86a450a99251,},Annotations:map[string]string{io.kubernetes.container.hash: 82d873be,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1ba3a1083dd2dcb1278523bde1f5387fb968eeba4196562c8bf480c69743a4a,PodSandboxId:3c971c491c8b9e730b6fd26723ec0ca29ef412e4df345a6cbca3317e6bdb84b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1698800787679889858,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-008483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8aa05c6e537fd3a0f101e32fb442ce36,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ae2965afd64df8be7fbc531d17512e8c69ea84d779fdb1bb8dda8a305cbc0ff,PodSandboxId:8f18c30c727e68b06fec8778b482096e772177c5747ac86ff5da1828206108ca,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1698800787610776578,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-008483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3605feb9b1e84ca198f01f1457eb52,},Annotations:map
[string]string{io.kubernetes.container.hash: ce0a95cd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b382513a898be97c48e1ae6d9ba0083e519d059d2e5161e8d91c119e828b9535,PodSandboxId:7512f2f28d7dd29242b39028586b60a61c9522a2e810f28949a5174bb67230a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1698800787440816689,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-008483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e5044dcdf76b056f4fa816fd
0dda7c1,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4de02e7339a911abc7905e3d5b90216f4a37571e0ffcb1411f51374a244ef3fe,PodSandboxId:7b470162184f3df0fb98378ca579fceaaf24b754964960b2a9ff1d127612a437,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1698800787366164936,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-008483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ebfcbba23e72624e12f49fd78f84e46,},A
nnotations:map[string]string{io.kubernetes.container.hash: 7d6e5ab1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a4b04660-1dd9-480e-9ac0-87e17f7e76ca name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:21:05 no-preload-008483 crio[709]: time="2023-11-01 01:21:05.050034590Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8296c861-f6a7-4609-8223-07aef54b1f4f name=/runtime.v1.RuntimeService/Version
	Nov 01 01:21:05 no-preload-008483 crio[709]: time="2023-11-01 01:21:05.050117630Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8296c861-f6a7-4609-8223-07aef54b1f4f name=/runtime.v1.RuntimeService/Version
	Nov 01 01:21:05 no-preload-008483 crio[709]: time="2023-11-01 01:21:05.051900414Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9d59bb1f-0cd0-46dc-a1e6-2fcb2bd80ced name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:21:05 no-preload-008483 crio[709]: time="2023-11-01 01:21:05.052320183Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698801665052300740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=9d59bb1f-0cd0-46dc-a1e6-2fcb2bd80ced name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:21:05 no-preload-008483 crio[709]: time="2023-11-01 01:21:05.053359561Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=910e08b8-fba0-4604-af42-5c8f82647e42 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:21:05 no-preload-008483 crio[709]: time="2023-11-01 01:21:05.053433293Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=910e08b8-fba0-4604-af42-5c8f82647e42 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:21:05 no-preload-008483 crio[709]: time="2023-11-01 01:21:05.053953112Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e91e26d41e22a636f3966fdb1dd6db999eae4ea6cff3e1290036854c8960f051,PodSandboxId:195d8157304c1005ac61e4f188e7c5240de832d9e80aff752fbf253770b0622a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1698800811030815361,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 909163da-9021-4cee-9a72-1bc9b6ae9390,},Annotations:map[string]string{io.kubernetes.container.hash: 1e44b7d8,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edd94ede27bbbe8cfe0647252d6bed169e64b894a76d5a29893d784dc05f519b,PodSandboxId:1e577e533c6773fe74f90f9960a1e296e7b3d9f2168345a6deecf8dbe94cb97c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1698800811087201369,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4cx5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57c1e87a-aa14-440d-9001-a6ba2ab7c8c6,},Annotations:map[string]string{io.kubernetes.container.hash: 510a8192,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73703bcd827a65c00d49d0b850c3eae382a733d0d82a35a7b6f0540825dcf58,PodSandboxId:b3d442321510a7263cd825e67380d31225427312783addaa6b0e07c26484866d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1698800810274125365,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-m8v7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 351a9458-075b-40d1-96d1-86a450a99251,},Annotations:map[string]string{io.kubernetes.container.hash: 82d873be,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1ba3a1083dd2dcb1278523bde1f5387fb968eeba4196562c8bf480c69743a4a,PodSandboxId:3c971c491c8b9e730b6fd26723ec0ca29ef412e4df345a6cbca3317e6bdb84b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1698800787679889858,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-008483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8aa05c6e537fd3a0f101e32fb442ce36,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ae2965afd64df8be7fbc531d17512e8c69ea84d779fdb1bb8dda8a305cbc0ff,PodSandboxId:8f18c30c727e68b06fec8778b482096e772177c5747ac86ff5da1828206108ca,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1698800787610776578,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-008483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3605feb9b1e84ca198f01f1457eb52,},Annotations:map
[string]string{io.kubernetes.container.hash: ce0a95cd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b382513a898be97c48e1ae6d9ba0083e519d059d2e5161e8d91c119e828b9535,PodSandboxId:7512f2f28d7dd29242b39028586b60a61c9522a2e810f28949a5174bb67230a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1698800787440816689,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-008483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e5044dcdf76b056f4fa816fd
0dda7c1,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4de02e7339a911abc7905e3d5b90216f4a37571e0ffcb1411f51374a244ef3fe,PodSandboxId:7b470162184f3df0fb98378ca579fceaaf24b754964960b2a9ff1d127612a437,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1698800787366164936,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-008483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ebfcbba23e72624e12f49fd78f84e46,},A
nnotations:map[string]string{io.kubernetes.container.hash: 7d6e5ab1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=910e08b8-fba0-4604-af42-5c8f82647e42 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	edd94ede27bbb       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   14 minutes ago      Running             kube-proxy                0                   1e577e533c677       kube-proxy-4cx5t
	e91e26d41e22a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   195d8157304c1       storage-provisioner
	d73703bcd827a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   14 minutes ago      Running             coredns                   0                   b3d442321510a       coredns-5dd5756b68-m8v7v
	c1ba3a1083dd2       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   14 minutes ago      Running             kube-scheduler            2                   3c971c491c8b9       kube-scheduler-no-preload-008483
	0ae2965afd64d       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   14 minutes ago      Running             etcd                      2                   8f18c30c727e6       etcd-no-preload-008483
	b382513a898be       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   14 minutes ago      Running             kube-controller-manager   2                   7512f2f28d7dd       kube-controller-manager-no-preload-008483
	4de02e7339a91       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   14 minutes ago      Running             kube-apiserver            2                   7b470162184f3       kube-apiserver-no-preload-008483
	
	* 
	* ==> coredns [d73703bcd827a65c00d49d0b850c3eae382a733d0d82a35a7b6f0540825dcf58] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-008483
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-008483
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9
	                    minikube.k8s.io/name=no-preload-008483
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_01T01_06_35_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Nov 2023 01:06:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-008483
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Nov 2023 01:21:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Nov 2023 01:17:07 +0000   Wed, 01 Nov 2023 01:06:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Nov 2023 01:17:07 +0000   Wed, 01 Nov 2023 01:06:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Nov 2023 01:17:07 +0000   Wed, 01 Nov 2023 01:06:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Nov 2023 01:17:07 +0000   Wed, 01 Nov 2023 01:06:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.140
	  Hostname:    no-preload-008483
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 52a9be0c57874f02a466e826841cfdf7
	  System UUID:                52a9be0c-5787-4f02-a466-e826841cfdf7
	  Boot ID:                    b0555844-9b75-4cdf-be6c-0809731b47c2
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-m8v7v                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-008483                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-008483             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-008483    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-4cx5t                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-008483             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-57f55c9bc5-qcxt7              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node no-preload-008483 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node no-preload-008483 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node no-preload-008483 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             14m   kubelet          Node no-preload-008483 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                14m   kubelet          Node no-preload-008483 status is now: NodeReady
	  Normal  RegisteredNode           14m   node-controller  Node no-preload-008483 event: Registered Node no-preload-008483 in Controller
	
	* 
	* ==> dmesg <==
	* [Nov 1 01:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068695] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.795198] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.991614] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.142829] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.737636] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.814081] systemd-fstab-generator[633]: Ignoring "noauto" for root device
	[  +0.131430] systemd-fstab-generator[644]: Ignoring "noauto" for root device
	[  +0.154693] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.110957] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.223749] systemd-fstab-generator[693]: Ignoring "noauto" for root device
	[ +31.732735] systemd-fstab-generator[1268]: Ignoring "noauto" for root device
	[Nov 1 01:02] kauditd_printk_skb: 29 callbacks suppressed
	[Nov 1 01:06] systemd-fstab-generator[3871]: Ignoring "noauto" for root device
	[  +9.293562] systemd-fstab-generator[4197]: Ignoring "noauto" for root device
	[ +13.745901] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [0ae2965afd64df8be7fbc531d17512e8c69ea84d779fdb1bb8dda8a305cbc0ff] <==
	* {"level":"info","ts":"2023-11-01T01:06:29.206469Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-01T01:06:29.206631Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.140:2380"}
	{"level":"info","ts":"2023-11-01T01:06:29.206698Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.140:2380"}
	{"level":"info","ts":"2023-11-01T01:06:29.726842Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"85ea5ca067fb3fe3 is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-01T01:06:29.726997Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"85ea5ca067fb3fe3 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-01T01:06:29.727068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"85ea5ca067fb3fe3 received MsgPreVoteResp from 85ea5ca067fb3fe3 at term 1"}
	{"level":"info","ts":"2023-11-01T01:06:29.72709Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"85ea5ca067fb3fe3 became candidate at term 2"}
	{"level":"info","ts":"2023-11-01T01:06:29.727099Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"85ea5ca067fb3fe3 received MsgVoteResp from 85ea5ca067fb3fe3 at term 2"}
	{"level":"info","ts":"2023-11-01T01:06:29.727117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"85ea5ca067fb3fe3 became leader at term 2"}
	{"level":"info","ts":"2023-11-01T01:06:29.727137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 85ea5ca067fb3fe3 elected leader 85ea5ca067fb3fe3 at term 2"}
	{"level":"info","ts":"2023-11-01T01:06:29.730578Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"85ea5ca067fb3fe3","local-member-attributes":"{Name:no-preload-008483 ClientURLs:[https://192.168.50.140:2379]}","request-path":"/0/members/85ea5ca067fb3fe3/attributes","cluster-id":"77a8f052fa5fccd4","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-01T01:06:29.73072Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-01T01:06:29.731705Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-01T01:06:29.731816Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T01:06:29.732043Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-01T01:06:29.732846Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.140:2379"}
	{"level":"info","ts":"2023-11-01T01:06:29.733579Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-01T01:06:29.733594Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-01T01:06:29.736022Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"77a8f052fa5fccd4","local-member-id":"85ea5ca067fb3fe3","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T01:06:29.736265Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T01:06:29.740598Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T01:16:29.774201Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":678}
	{"level":"info","ts":"2023-11-01T01:16:29.777007Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":678,"took":"2.253542ms","hash":513519050}
	{"level":"info","ts":"2023-11-01T01:16:29.777119Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":513519050,"revision":678,"compact-revision":-1}
	{"level":"info","ts":"2023-11-01T01:21:00.672407Z","caller":"traceutil/trace.go:171","msg":"trace[1293005338] transaction","detail":"{read_only:false; response_revision:1141; number_of_response:1; }","duration":"210.671844ms","start":"2023-11-01T01:21:00.461617Z","end":"2023-11-01T01:21:00.672289Z","steps":["trace[1293005338] 'process raft request'  (duration: 210.525126ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  01:21:05 up 20 min,  0 users,  load average: 0.41, 0.24, 0.19
	Linux no-preload-008483 5.10.57 #1 SMP Tue Oct 31 22:14:31 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [4de02e7339a911abc7905e3d5b90216f4a37571e0ffcb1411f51374a244ef3fe] <==
	* W1101 01:16:32.608192       1 handler_proxy.go:93] no RequestInfo found in the context
	E1101 01:16:32.608286       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1101 01:16:32.608301       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1101 01:16:32.608384       1 handler_proxy.go:93] no RequestInfo found in the context
	E1101 01:16:32.608421       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1101 01:16:32.609597       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1101 01:17:31.439168       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1101 01:17:32.609078       1 handler_proxy.go:93] no RequestInfo found in the context
	E1101 01:17:32.609297       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1101 01:17:32.609343       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1101 01:17:32.610424       1 handler_proxy.go:93] no RequestInfo found in the context
	E1101 01:17:32.610568       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1101 01:17:32.610618       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1101 01:18:31.439211       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1101 01:19:31.439310       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1101 01:19:32.609995       1 handler_proxy.go:93] no RequestInfo found in the context
	E1101 01:19:32.610202       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1101 01:19:32.610278       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1101 01:19:32.611023       1 handler_proxy.go:93] no RequestInfo found in the context
	E1101 01:19:32.611075       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1101 01:19:32.612241       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1101 01:20:31.438905       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [b382513a898be97c48e1ae6d9ba0083e519d059d2e5161e8d91c119e828b9535] <==
	* I1101 01:15:17.945767       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:15:47.456807       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:15:47.956752       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:16:17.463090       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:16:17.966503       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:16:47.468554       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:16:47.975807       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:17:17.475568       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:17:17.985333       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1101 01:17:39.382121       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="278.735µs"
	E1101 01:17:47.482373       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:17:47.998829       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1101 01:17:52.380490       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="170.939µs"
	E1101 01:18:17.488859       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:18:18.008635       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:18:47.495337       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:18:48.018451       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:19:17.502405       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:19:18.027649       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:19:47.509427       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:19:48.037520       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:20:17.515871       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:20:18.046562       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1101 01:20:47.521752       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1101 01:20:48.057363       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [edd94ede27bbbe8cfe0647252d6bed169e64b894a76d5a29893d784dc05f519b] <==
	* I1101 01:06:51.425719       1 server_others.go:69] "Using iptables proxy"
	I1101 01:06:51.436786       1 node.go:141] Successfully retrieved node IP: 192.168.50.140
	I1101 01:06:51.479431       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1101 01:06:51.479537       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 01:06:51.483017       1 server_others.go:152] "Using iptables Proxier"
	I1101 01:06:51.483101       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 01:06:51.483400       1 server.go:846] "Version info" version="v1.28.3"
	I1101 01:06:51.483441       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 01:06:51.484746       1 config.go:188] "Starting service config controller"
	I1101 01:06:51.484827       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 01:06:51.484855       1 config.go:97] "Starting endpoint slice config controller"
	I1101 01:06:51.484859       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 01:06:51.487064       1 config.go:315] "Starting node config controller"
	I1101 01:06:51.487206       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 01:06:51.586033       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1101 01:06:51.590145       1 shared_informer.go:318] Caches are synced for service config
	I1101 01:06:51.590176       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [c1ba3a1083dd2dcb1278523bde1f5387fb968eeba4196562c8bf480c69743a4a] <==
	* W1101 01:06:31.662677       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1101 01:06:31.662737       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 01:06:31.662907       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1101 01:06:31.664014       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1101 01:06:31.664383       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1101 01:06:31.664446       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1101 01:06:31.664515       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1101 01:06:31.664594       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1101 01:06:31.664395       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1101 01:06:31.664665       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1101 01:06:32.488136       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1101 01:06:32.488233       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 01:06:32.505675       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1101 01:06:32.505728       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1101 01:06:32.532772       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1101 01:06:32.532797       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1101 01:06:32.710884       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1101 01:06:32.710992       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1101 01:06:32.728153       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1101 01:06:32.728398       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1101 01:06:32.755735       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1101 01:06:32.755783       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1101 01:06:32.881325       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1101 01:06:32.881391       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1101 01:06:34.952471       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-11-01 01:01:08 UTC, ends at Wed 2023-11-01 01:21:05 UTC. --
	Nov 01 01:18:17 no-preload-008483 kubelet[4204]: E1101 01:18:17.362602    4204 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qcxt7" podUID="bf444b92-dd54-43fc-a9a8-0e9000b562e3"
	Nov 01 01:18:31 no-preload-008483 kubelet[4204]: E1101 01:18:31.362892    4204 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qcxt7" podUID="bf444b92-dd54-43fc-a9a8-0e9000b562e3"
	Nov 01 01:18:35 no-preload-008483 kubelet[4204]: E1101 01:18:35.501613    4204 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 01 01:18:35 no-preload-008483 kubelet[4204]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 01 01:18:35 no-preload-008483 kubelet[4204]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 01 01:18:35 no-preload-008483 kubelet[4204]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 01 01:18:43 no-preload-008483 kubelet[4204]: E1101 01:18:43.362379    4204 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qcxt7" podUID="bf444b92-dd54-43fc-a9a8-0e9000b562e3"
	Nov 01 01:18:55 no-preload-008483 kubelet[4204]: E1101 01:18:55.363700    4204 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qcxt7" podUID="bf444b92-dd54-43fc-a9a8-0e9000b562e3"
	Nov 01 01:19:09 no-preload-008483 kubelet[4204]: E1101 01:19:09.364325    4204 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qcxt7" podUID="bf444b92-dd54-43fc-a9a8-0e9000b562e3"
	Nov 01 01:19:24 no-preload-008483 kubelet[4204]: E1101 01:19:24.363090    4204 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qcxt7" podUID="bf444b92-dd54-43fc-a9a8-0e9000b562e3"
	Nov 01 01:19:35 no-preload-008483 kubelet[4204]: E1101 01:19:35.506144    4204 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 01 01:19:35 no-preload-008483 kubelet[4204]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 01 01:19:35 no-preload-008483 kubelet[4204]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 01 01:19:35 no-preload-008483 kubelet[4204]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 01 01:19:37 no-preload-008483 kubelet[4204]: E1101 01:19:37.363177    4204 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qcxt7" podUID="bf444b92-dd54-43fc-a9a8-0e9000b562e3"
	Nov 01 01:19:50 no-preload-008483 kubelet[4204]: E1101 01:19:50.361862    4204 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qcxt7" podUID="bf444b92-dd54-43fc-a9a8-0e9000b562e3"
	Nov 01 01:20:03 no-preload-008483 kubelet[4204]: E1101 01:20:03.362733    4204 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qcxt7" podUID="bf444b92-dd54-43fc-a9a8-0e9000b562e3"
	Nov 01 01:20:14 no-preload-008483 kubelet[4204]: E1101 01:20:14.362716    4204 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qcxt7" podUID="bf444b92-dd54-43fc-a9a8-0e9000b562e3"
	Nov 01 01:20:25 no-preload-008483 kubelet[4204]: E1101 01:20:25.363565    4204 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qcxt7" podUID="bf444b92-dd54-43fc-a9a8-0e9000b562e3"
	Nov 01 01:20:35 no-preload-008483 kubelet[4204]: E1101 01:20:35.502391    4204 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 01 01:20:35 no-preload-008483 kubelet[4204]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 01 01:20:35 no-preload-008483 kubelet[4204]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 01 01:20:35 no-preload-008483 kubelet[4204]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 01 01:20:39 no-preload-008483 kubelet[4204]: E1101 01:20:39.363427    4204 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qcxt7" podUID="bf444b92-dd54-43fc-a9a8-0e9000b562e3"
	Nov 01 01:20:52 no-preload-008483 kubelet[4204]: E1101 01:20:52.362552    4204 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qcxt7" podUID="bf444b92-dd54-43fc-a9a8-0e9000b562e3"
	
	* 
	* ==> storage-provisioner [e91e26d41e22a636f3966fdb1dd6db999eae4ea6cff3e1290036854c8960f051] <==
	* I1101 01:06:51.323773       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 01:06:51.351818       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 01:06:51.351989       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 01:06:51.367052       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 01:06:51.369588       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-008483_71f990fe-abf4-4bd5-b75e-38511119a99b!
	I1101 01:06:51.370831       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cae153b0-0fc0-420c-8f0e-867709ef7140", APIVersion:"v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-008483_71f990fe-abf4-4bd5-b75e-38511119a99b became leader
	I1101 01:06:51.471073       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-008483_71f990fe-abf4-4bd5-b75e-38511119a99b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-008483 -n no-preload-008483
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-008483 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-qcxt7
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-008483 describe pod metrics-server-57f55c9bc5-qcxt7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-008483 describe pod metrics-server-57f55c9bc5-qcxt7: exit status 1 (67.950666ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-qcxt7" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-008483 describe pod metrics-server-57f55c9bc5-qcxt7: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (309.05s)
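The AddonExistsAfterStop failures in this report all time out waiting for pods labelled k8s-app=kubernetes-dashboard to reach Running. For anyone reproducing that check by hand against a profile, the snippet below is a minimal client-go sketch of a label-selector wait with a timeout. It is an illustration only, not the harness's actual implementation; the kubeconfig path and 5-second poll interval are assumptions, while the namespace, selector, and 9-minute timeout mirror the failing wait logged in this report.

// Minimal sketch: poll until a pod matching a label selector is Running in a
// namespace, or give up after a timeout. Illustrative only; not minikube's
// test code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForLabeledPod(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		time.Sleep(5 * time.Second) // poll interval is an assumption
	}
	return fmt.Errorf("no Running pod matching %q in %q after %v", selector, ns, timeout)
}

func main() {
	// Kubeconfig path is an assumption; the harness uses the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Namespace, selector, and 9m timeout mirror the failing wait in this report.
	if err := waitForLabeledPod(context.Background(), cs, "kubernetes-dashboard",
		"k8s-app=kubernetes-dashboard", 9*time.Minute); err != nil {
		fmt.Println("FAIL:", err)
	}
}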

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (225.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1101 01:17:16.006966   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
E1101 01:17:43.437698   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/kindnet-090856/client.crt: no such file or directory
E1101 01:18:02.504116   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
E1101 01:18:32.448077   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/calico-090856/client.crt: no such file or directory
E1101 01:19:51.060178   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/custom-flannel-090856/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-330042 -n old-k8s-version-330042
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-11-01 01:20:22.923291743 +0000 UTC m=+5796.487873986
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-330042 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-330042 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.076µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-330042 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
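The image assertion above is empty because the preceding kubectl describe call had already hit the context deadline. As a sketch of what such a check looks like with client-go, the snippet below lists the container images of the dashboard-metrics-scraper Deployment so they can be compared against the expected registry.k8s.io/echoserver:1.4 string. The function, namespace, and deployment wiring are illustrative assumptions, not the harness's code.

// Sketch: list the container images of a Deployment for comparison against an
// expected image string. Illustrative only; not the harness's implementation.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func deploymentImages(ctx context.Context, cs kubernetes.Interface, ns, name string) ([]string, error) {
	dep, err := cs.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	var images []string
	for _, c := range dep.Spec.Template.Spec.Containers {
		images = append(images, c.Image)
	}
	return images, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // kubeconfig path assumed
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	images, err := deploymentImages(context.Background(), cs, "kubernetes-dashboard", "dashboard-metrics-scraper")
	if err != nil {
		panic(err)
	}
	fmt.Println(images) // expect an entry containing registry.k8s.io/echoserver:1.4
}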
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-330042 -n old-k8s-version-330042
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-330042 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-330042 logs -n 25: (1.577606033s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p flannel-090856 sudo                                 | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | containerd config dump                                 |                              |         |                |                     |                     |
	| ssh     | -p flannel-090856 sudo                                 | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | systemctl status crio --all                            |                              |         |                |                     |                     |
	|         | --full --no-pager                                      |                              |         |                |                     |                     |
	| ssh     | -p flannel-090856 sudo                                 | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |                |                     |                     |
	| start   | -p embed-certs-754132                                  | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:52 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| ssh     | -p flannel-090856 sudo find                            | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |                |                     |                     |
	| ssh     | -p flannel-090856 sudo crio                            | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | config                                                 |                              |         |                |                     |                     |
	| delete  | -p flannel-090856                                      | flannel-090856               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	| delete  | -p                                                     | disable-driver-mounts-130996 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:51 UTC |
	|         | disable-driver-mounts-130996                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:51 UTC | 01 Nov 23 00:53 UTC |
	|         | default-k8s-diff-port-639310                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-008483             | no-preload-008483            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC | 01 Nov 23 00:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-008483                                   | no-preload-008483            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-754132            | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC | 01 Nov 23 00:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-754132                                  | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-330042        | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC | 01 Nov 23 00:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-330042                              | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-639310  | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:53 UTC | 01 Nov 23 00:53 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:53 UTC |                     |
	|         | default-k8s-diff-port-639310                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-008483                  | no-preload-008483            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-754132                 | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-008483                                   | no-preload-008483            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC | 01 Nov 23 01:06 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| start   | -p embed-certs-754132                                  | embed-certs-754132           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC | 01 Nov 23 01:05 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-330042             | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-330042                              | old-k8s-version-330042       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:55 UTC | 01 Nov 23 01:07 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-639310       | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-639310 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:56 UTC | 01 Nov 23 01:06 UTC |
	|         | default-k8s-diff-port-639310                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/01 00:56:25
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 00:56:25.029853   59148 out.go:296] Setting OutFile to fd 1 ...
	I1101 00:56:25.030119   59148 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:56:25.030128   59148 out.go:309] Setting ErrFile to fd 2...
	I1101 00:56:25.030133   59148 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:56:25.030311   59148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7305/.minikube/bin
	I1101 00:56:25.030856   59148 out.go:303] Setting JSON to false
	I1101 00:56:25.031741   59148 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5930,"bootTime":1698794255,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 00:56:25.031805   59148 start.go:138] virtualization: kvm guest
	I1101 00:56:25.034341   59148 out.go:177] * [default-k8s-diff-port-639310] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1101 00:56:25.036261   59148 out.go:177]   - MINIKUBE_LOCATION=17486
	I1101 00:56:25.037829   59148 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 00:56:25.036294   59148 notify.go:220] Checking for updates...
	I1101 00:56:25.041068   59148 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 00:56:25.042691   59148 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7305/.minikube
	I1101 00:56:25.044204   59148 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 00:56:25.045719   59148 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 00:56:25.047781   59148 config.go:182] Loaded profile config "default-k8s-diff-port-639310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:56:25.048183   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:56:25.048245   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:56:25.062714   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34345
	I1101 00:56:25.063108   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:56:25.063662   59148 main.go:141] libmachine: Using API Version  1
	I1101 00:56:25.063682   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:56:25.064083   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:56:25.064302   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 00:56:25.064571   59148 driver.go:378] Setting default libvirt URI to qemu:///system
	I1101 00:56:25.064917   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:56:25.064958   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:56:25.079214   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46451
	I1101 00:56:25.079576   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:56:25.080090   59148 main.go:141] libmachine: Using API Version  1
	I1101 00:56:25.080115   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:56:25.080419   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:56:25.080616   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 00:56:25.119015   59148 out.go:177] * Using the kvm2 driver based on existing profile
	I1101 00:56:25.120650   59148 start.go:298] selected driver: kvm2
	I1101 00:56:25.120670   59148 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-639310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-639310 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.97 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:56:25.120819   59148 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 00:56:25.121515   59148 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:56:25.121580   59148 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17486-7305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1101 00:56:25.137482   59148 install.go:137] /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1101 00:56:25.137885   59148 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 00:56:25.137962   59148 cni.go:84] Creating CNI manager for ""
	I1101 00:56:25.137976   59148 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 00:56:25.137988   59148 start_flags.go:323] config:
	{Name:default-k8s-diff-port-639310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-639310 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.97 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:56:25.138186   59148 iso.go:125] acquiring lock: {Name:mk1f649ca0b7c1ae293cd66cb85f9eeda028b20b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:56:25.140405   59148 out.go:177] * Starting control plane node default-k8s-diff-port-639310 in cluster default-k8s-diff-port-639310
	I1101 00:56:25.141855   59148 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 00:56:25.141918   59148 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1101 00:56:25.141935   59148 cache.go:56] Caching tarball of preloaded images
	I1101 00:56:25.142048   59148 preload.go:174] Found /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 00:56:25.142066   59148 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1101 00:56:25.142204   59148 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/config.json ...
	I1101 00:56:25.142449   59148 start.go:365] acquiring machines lock for default-k8s-diff-port-639310: {Name:mk7aad88408c319111b9be8e59d9593a9e88374b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 00:56:26.060176   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:29.132322   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:35.212221   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:38.284225   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:44.364219   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:47.436224   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:53.516201   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:56:56.588256   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:02.668213   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:05.740252   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:11.820242   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:14.892259   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:20.972213   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:24.044181   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:30.124291   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:33.196239   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:39.276183   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:42.348235   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:48.428230   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:51.500275   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:57:57.580250   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:00.652208   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:06.732207   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:09.804251   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:15.884265   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:18.956206   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:25.040217   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:28.108288   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:34.188238   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:37.260268   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:43.340210   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:46.412248   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:52.492221   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:58:55.564188   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:01.644193   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:04.716194   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:10.796265   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:13.868226   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:19.948219   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:23.020283   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:29.100251   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:32.172268   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:38.252219   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:41.324223   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:47.404323   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:50.476273   58676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I1101 00:59:53.480339   58730 start.go:369] acquired machines lock for "embed-certs-754132" in 4m35.118425724s
	I1101 00:59:53.480387   58730 start.go:96] Skipping create...Using existing machine configuration
	I1101 00:59:53.480393   58730 fix.go:54] fixHost starting: 
	I1101 00:59:53.480707   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 00:59:53.480737   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:59:53.495582   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34891
	I1101 00:59:53.495998   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:59:53.496445   58730 main.go:141] libmachine: Using API Version  1
	I1101 00:59:53.496466   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:59:53.496844   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:59:53.497017   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 00:59:53.497171   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetState
	I1101 00:59:53.498937   58730 fix.go:102] recreateIfNeeded on embed-certs-754132: state=Stopped err=<nil>
	I1101 00:59:53.498956   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	W1101 00:59:53.499128   58730 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 00:59:53.500909   58730 out.go:177] * Restarting existing kvm2 VM for "embed-certs-754132" ...
	I1101 00:59:53.478140   58676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 00:59:53.478177   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 00:59:53.480187   58676 machine.go:91] provisioned docker machine in 4m37.408348367s
	I1101 00:59:53.480232   58676 fix.go:56] fixHost completed within 4m37.430154401s
	I1101 00:59:53.480241   58676 start.go:83] releasing machines lock for "no-preload-008483", held for 4m37.430178737s
	W1101 00:59:53.480270   58676 start.go:691] error starting host: provision: host is not running
	W1101 00:59:53.480361   58676 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1101 00:59:53.480371   58676 start.go:706] Will try again in 5 seconds ...
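	The long run of "no route to host" lines above is libmachine repeatedly polling the guest's SSH port while the stopped VM is still unreachable, before giving up and retrying the whole host start. A minimal Go sketch of that dial-and-retry pattern (a hypothetical helper, not minikube's actual code; the host/port and timings are taken from the log purely for illustration):
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	// waitForSSH polls a host's SSH port until it answers or the deadline passes.
	// Hypothetical illustration of the dial/retry loop visible in the log above.
	func waitForSSH(addr string, deadline time.Duration) error {
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
			if err == nil {
				conn.Close()
				return nil // SSH port is reachable
			}
			fmt.Printf("Error dialing TCP: %v (will retry)\n", err)
			time.Sleep(3 * time.Second)
		}
		return fmt.Errorf("timed out waiting for %s", addr)
	}
	
	func main() {
		if err := waitForSSH("192.168.50.140:22", 5*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	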
	I1101 00:59:53.502467   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Start
	I1101 00:59:53.502656   58730 main.go:141] libmachine: (embed-certs-754132) Ensuring networks are active...
	I1101 00:59:53.503633   58730 main.go:141] libmachine: (embed-certs-754132) Ensuring network default is active
	I1101 00:59:53.504036   58730 main.go:141] libmachine: (embed-certs-754132) Ensuring network mk-embed-certs-754132 is active
	I1101 00:59:53.504557   58730 main.go:141] libmachine: (embed-certs-754132) Getting domain xml...
	I1101 00:59:53.505302   58730 main.go:141] libmachine: (embed-certs-754132) Creating domain...
	I1101 00:59:54.749625   58730 main.go:141] libmachine: (embed-certs-754132) Waiting to get IP...
	I1101 00:59:54.750551   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:54.750924   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:54.751002   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:54.750917   59675 retry.go:31] will retry after 295.652358ms: waiting for machine to come up
	I1101 00:59:55.048450   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:55.048884   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:55.048910   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:55.048845   59675 retry.go:31] will retry after 335.376353ms: waiting for machine to come up
	I1101 00:59:55.385612   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:55.385959   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:55.386000   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:55.385952   59675 retry.go:31] will retry after 353.381783ms: waiting for machine to come up
	I1101 00:59:55.740456   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:55.740943   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:55.740979   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:55.740874   59675 retry.go:31] will retry after 417.863733ms: waiting for machine to come up
	I1101 00:59:56.160773   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:56.161271   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:56.161298   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:56.161236   59675 retry.go:31] will retry after 659.454883ms: waiting for machine to come up
	I1101 00:59:56.822139   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:56.822551   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:56.822573   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:56.822511   59675 retry.go:31] will retry after 627.06089ms: waiting for machine to come up
	I1101 00:59:57.451254   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:57.451659   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:57.451687   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:57.451624   59675 retry.go:31] will retry after 1.095096876s: waiting for machine to come up
	I1101 00:59:58.481145   58676 start.go:365] acquiring machines lock for no-preload-008483: {Name:mk7aad88408c319111b9be8e59d9593a9e88374b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 00:59:58.548870   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:58.549359   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:58.549410   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:58.549323   59675 retry.go:31] will retry after 1.133377858s: waiting for machine to come up
	I1101 00:59:59.684741   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 00:59:59.685182   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 00:59:59.685205   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 00:59:59.685149   59675 retry.go:31] will retry after 1.332824718s: waiting for machine to come up
	I1101 01:00:01.019662   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:01.020166   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 01:00:01.020217   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 01:00:01.020119   59675 retry.go:31] will retry after 1.62664347s: waiting for machine to come up
	I1101 01:00:02.649017   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:02.649459   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 01:00:02.649490   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 01:00:02.649404   59675 retry.go:31] will retry after 2.043788133s: waiting for machine to come up
	I1101 01:00:04.695225   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:04.695657   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 01:00:04.695711   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 01:00:04.695640   59675 retry.go:31] will retry after 2.435347975s: waiting for machine to come up
	I1101 01:00:07.133078   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:07.133531   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 01:00:07.133567   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 01:00:07.133492   59675 retry.go:31] will retry after 2.768108097s: waiting for machine to come up
	I1101 01:00:09.903094   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:09.903460   58730 main.go:141] libmachine: (embed-certs-754132) DBG | unable to find current IP address of domain embed-certs-754132 in network mk-embed-certs-754132
	I1101 01:00:09.903484   58730 main.go:141] libmachine: (embed-certs-754132) DBG | I1101 01:00:09.903424   59675 retry.go:31] will retry after 3.955575113s: waiting for machine to come up
	I1101 01:00:15.240546   58823 start.go:369] acquired machines lock for "old-k8s-version-330042" in 4m47.663537715s
	I1101 01:00:15.240608   58823 start.go:96] Skipping create...Using existing machine configuration
	I1101 01:00:15.240616   58823 fix.go:54] fixHost starting: 
	I1101 01:00:15.241087   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:00:15.241135   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:00:15.260921   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45157
	I1101 01:00:15.261342   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:00:15.261921   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:00:15.261954   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:00:15.262285   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:00:15.262488   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:15.262657   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetState
	I1101 01:00:15.264332   58823 fix.go:102] recreateIfNeeded on old-k8s-version-330042: state=Stopped err=<nil>
	I1101 01:00:15.264357   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	W1101 01:00:15.264541   58823 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 01:00:15.266960   58823 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-330042" ...
	I1101 01:00:13.860184   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.860818   58730 main.go:141] libmachine: (embed-certs-754132) Found IP for machine: 192.168.61.83
	I1101 01:00:13.860849   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has current primary IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.860866   58730 main.go:141] libmachine: (embed-certs-754132) Reserving static IP address...
	I1101 01:00:13.861321   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "embed-certs-754132", mac: "52:54:00:5e:2f:dd", ip: "192.168.61.83"} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:13.861350   58730 main.go:141] libmachine: (embed-certs-754132) Reserved static IP address: 192.168.61.83
	I1101 01:00:13.861362   58730 main.go:141] libmachine: (embed-certs-754132) DBG | skip adding static IP to network mk-embed-certs-754132 - found existing host DHCP lease matching {name: "embed-certs-754132", mac: "52:54:00:5e:2f:dd", ip: "192.168.61.83"}
	I1101 01:00:13.861372   58730 main.go:141] libmachine: (embed-certs-754132) Waiting for SSH to be available...
	I1101 01:00:13.861384   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Getting to WaitForSSH function...
	I1101 01:00:13.864760   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.865204   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:13.865232   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.865368   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Using SSH client type: external
	I1101 01:00:13.865408   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa (-rw-------)
	I1101 01:00:13.865434   58730 main.go:141] libmachine: (embed-certs-754132) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.83 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 01:00:13.865446   58730 main.go:141] libmachine: (embed-certs-754132) DBG | About to run SSH command:
	I1101 01:00:13.865454   58730 main.go:141] libmachine: (embed-certs-754132) DBG | exit 0
	I1101 01:00:13.964103   58730 main.go:141] libmachine: (embed-certs-754132) DBG | SSH cmd err, output: <nil>: 
	I1101 01:00:13.964444   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetConfigRaw
	I1101 01:00:13.965066   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetIP
	I1101 01:00:13.967463   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.967768   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:13.967791   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.968100   58730 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/config.json ...
	I1101 01:00:13.968294   58730 machine.go:88] provisioning docker machine ...
	I1101 01:00:13.968312   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:00:13.968530   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetMachineName
	I1101 01:00:13.968707   58730 buildroot.go:166] provisioning hostname "embed-certs-754132"
	I1101 01:00:13.968728   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetMachineName
	I1101 01:00:13.968901   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:13.971288   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.971637   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:13.971676   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:13.971792   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:13.972000   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:13.972181   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:13.972312   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:13.972476   58730 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:13.972798   58730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I1101 01:00:13.972812   58730 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-754132 && echo "embed-certs-754132" | sudo tee /etc/hostname
	I1101 01:00:14.121000   58730 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-754132
	
	I1101 01:00:14.121036   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:14.124379   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.124813   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:14.124840   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.125085   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:14.125339   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:14.125667   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:14.125832   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:14.126091   58730 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:14.126401   58730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I1101 01:00:14.126418   58730 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-754132' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-754132/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-754132' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 01:00:14.268155   58730 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 01:00:14.268188   58730 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1101 01:00:14.268210   58730 buildroot.go:174] setting up certificates
	I1101 01:00:14.268238   58730 provision.go:83] configureAuth start
	I1101 01:00:14.268255   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetMachineName
	I1101 01:00:14.268542   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetIP
	I1101 01:00:14.271516   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.271946   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:14.271984   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.272150   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:14.274610   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.275017   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:14.275054   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.275206   58730 provision.go:138] copyHostCerts
	I1101 01:00:14.275269   58730 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem, removing ...
	I1101 01:00:14.275282   58730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 01:00:14.275351   58730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1101 01:00:14.275442   58730 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem, removing ...
	I1101 01:00:14.275450   58730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 01:00:14.275475   58730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1101 01:00:14.275526   58730 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem, removing ...
	I1101 01:00:14.275533   58730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 01:00:14.275571   58730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1101 01:00:14.275616   58730 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.embed-certs-754132 san=[192.168.61.83 192.168.61.83 localhost 127.0.0.1 minikube embed-certs-754132]
	I1101 01:00:14.494175   58730 provision.go:172] copyRemoteCerts
	I1101 01:00:14.494239   58730 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 01:00:14.494265   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:14.496921   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.497263   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:14.497310   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.497482   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:14.497748   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:14.497906   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:14.498052   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:00:14.592739   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 01:00:14.614862   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1101 01:00:14.636483   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1101 01:00:14.658154   58730 provision.go:86] duration metric: configureAuth took 389.900669ms
	I1101 01:00:14.658179   58730 buildroot.go:189] setting minikube options for container-runtime
	I1101 01:00:14.658364   58730 config.go:182] Loaded profile config "embed-certs-754132": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:00:14.658478   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:14.661110   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.661450   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:14.661500   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.661667   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:14.661853   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:14.661997   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:14.662120   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:14.662279   58730 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:14.662573   58730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I1101 01:00:14.662589   58730 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 01:00:14.974481   58730 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 01:00:14.974505   58730 machine.go:91] provisioned docker machine in 1.006198078s
	I1101 01:00:14.974521   58730 start.go:300] post-start starting for "embed-certs-754132" (driver="kvm2")
	I1101 01:00:14.974534   58730 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 01:00:14.974556   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:00:14.974913   58730 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 01:00:14.974946   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:14.977485   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.977815   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:14.977846   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:14.977970   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:14.978146   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:14.978310   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:14.978470   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:00:15.073889   58730 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 01:00:15.077710   58730 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 01:00:15.077734   58730 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1101 01:00:15.077791   58730 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1101 01:00:15.077855   58730 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> 145042.pem in /etc/ssl/certs
	I1101 01:00:15.077961   58730 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 01:00:15.086567   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:00:15.107446   58730 start.go:303] post-start completed in 132.911351ms
	I1101 01:00:15.107468   58730 fix.go:56] fixHost completed within 21.627074953s
	I1101 01:00:15.107485   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:15.110070   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.110392   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:15.110426   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.110552   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:15.110748   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:15.110914   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:15.111078   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:15.111268   58730 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:15.111683   58730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I1101 01:00:15.111696   58730 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1101 01:00:15.240326   58730 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698800415.188118531
	
	I1101 01:00:15.240357   58730 fix.go:206] guest clock: 1698800415.188118531
	I1101 01:00:15.240365   58730 fix.go:219] Guest: 2023-11-01 01:00:15.188118531 +0000 UTC Remote: 2023-11-01 01:00:15.107470988 +0000 UTC m=+296.909935143 (delta=80.647543ms)
	I1101 01:00:15.240385   58730 fix.go:190] guest clock delta is within tolerance: 80.647543ms
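	The fix.go lines above read the guest's clock over SSH, compare it with the host's, and accept the drift when it is small. A hedged sketch of that comparison (the tolerance value is an assumption for illustration only; the timestamp is the one printed in the log):
	
	package main
	
	import (
		"fmt"
		"time"
	)
	
	// Hypothetical sketch of the guest-clock check logged above: compare the
	// guest's reported time with the host's and accept small drift.
	func main() {
		guest := time.Unix(1698800415, 188118531) // guest clock value from the log
		host := time.Now()
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = time.Second // assumed threshold, for illustration only
		if delta <= tolerance {
			fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
		} else {
			fmt.Printf("guest clock drift %v exceeds tolerance, would resync\n", delta)
		}
	}
	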
	I1101 01:00:15.240420   58730 start.go:83] releasing machines lock for "embed-certs-754132", held for 21.760022516s
	I1101 01:00:15.240464   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:00:15.240736   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetIP
	I1101 01:00:15.243570   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.243905   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:15.243961   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.244163   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:00:15.244698   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:00:15.244872   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:00:15.244948   58730 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 01:00:15.245012   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:15.245063   58730 ssh_runner.go:195] Run: cat /version.json
	I1101 01:00:15.245089   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:00:15.247618   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.247886   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.247985   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:15.248018   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.248265   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:15.248358   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:15.248387   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:15.248422   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:15.248600   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:00:15.248601   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:15.248774   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:00:15.248765   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:00:15.248913   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:00:15.249034   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:00:15.383514   58730 ssh_runner.go:195] Run: systemctl --version
	I1101 01:00:15.389291   58730 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 01:00:15.531982   58730 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 01:00:15.537622   58730 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 01:00:15.537711   58730 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 01:00:15.554440   58730 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 01:00:15.554488   58730 start.go:472] detecting cgroup driver to use...
	I1101 01:00:15.554549   58730 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 01:00:15.569732   58730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 01:00:15.582752   58730 docker.go:204] disabling cri-docker service (if available) ...
	I1101 01:00:15.582795   58730 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 01:00:15.596221   58730 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 01:00:15.609815   58730 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 01:00:15.717679   58730 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 01:00:15.842128   58730 docker.go:220] disabling docker service ...
	I1101 01:00:15.842203   58730 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 01:00:15.854613   58730 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 01:00:15.869487   58730 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 01:00:15.991107   58730 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 01:00:16.118392   58730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 01:00:16.131570   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 01:00:16.150691   58730 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 01:00:16.150755   58730 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:16.160081   58730 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 01:00:16.160171   58730 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:16.170277   58730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:16.180469   58730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:16.189966   58730 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 01:00:16.199465   58730 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 01:00:16.207995   58730 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 01:00:16.208057   58730 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 01:00:16.221491   58730 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 01:00:16.231855   58730 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 01:00:16.355227   58730 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 01:00:16.520341   58730 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 01:00:16.520403   58730 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 01:00:16.525071   58730 start.go:540] Will wait 60s for crictl version
	I1101 01:00:16.525143   58730 ssh_runner.go:195] Run: which crictl
	I1101 01:00:16.529138   58730 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 01:00:16.566007   58730 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1101 01:00:16.566082   58730 ssh_runner.go:195] Run: crio --version
	I1101 01:00:16.612652   58730 ssh_runner.go:195] Run: crio --version
	I1101 01:00:16.665668   58730 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
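	At this point CRI-O has been reconfigured and restarted, and the runtime is verified over SSH with "sudo /usr/bin/crictl version" and "crio --version" as shown above. A hypothetical local equivalent of those two checks using os/exec (assumes the binaries exist at the logged paths on the machine running it):
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// Hypothetical local equivalent of the runtime checks run over SSH above:
	// `sudo /usr/bin/crictl version` and `crio --version`.
	func main() {
		checks := [][]string{
			{"sudo", "/usr/bin/crictl", "version"},
			{"crio", "--version"},
		}
		for _, args := range checks {
			out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
			if err != nil {
				fmt.Printf("%v failed: %v\n", args, err)
				continue
			}
			fmt.Printf("%v:\n%s\n", args, out)
		}
	}
	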
	I1101 01:00:15.268389   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Start
	I1101 01:00:15.268575   58823 main.go:141] libmachine: (old-k8s-version-330042) Ensuring networks are active...
	I1101 01:00:15.269280   58823 main.go:141] libmachine: (old-k8s-version-330042) Ensuring network default is active
	I1101 01:00:15.269618   58823 main.go:141] libmachine: (old-k8s-version-330042) Ensuring network mk-old-k8s-version-330042 is active
	I1101 01:00:15.270056   58823 main.go:141] libmachine: (old-k8s-version-330042) Getting domain xml...
	I1101 01:00:15.270814   58823 main.go:141] libmachine: (old-k8s-version-330042) Creating domain...
	I1101 01:00:16.566526   58823 main.go:141] libmachine: (old-k8s-version-330042) Waiting to get IP...
	I1101 01:00:16.567713   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:16.568239   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:16.568336   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:16.568220   59797 retry.go:31] will retry after 200.046919ms: waiting for machine to come up
	I1101 01:00:16.769849   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:16.770436   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:16.770477   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:16.770427   59797 retry.go:31] will retry after 301.397937ms: waiting for machine to come up
	I1101 01:00:17.074180   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:17.074657   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:17.074689   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:17.074626   59797 retry.go:31] will retry after 462.511505ms: waiting for machine to come up
	I1101 01:00:16.667657   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetIP
	I1101 01:00:16.670756   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:16.671148   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:00:16.671216   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:00:16.671377   58730 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1101 01:00:16.675342   58730 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:00:16.687224   58730 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 01:00:16.687310   58730 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:00:16.726714   58730 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1101 01:00:16.726779   58730 ssh_runner.go:195] Run: which lz4
	I1101 01:00:16.730745   58730 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1101 01:00:16.734588   58730 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 01:00:16.734623   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1101 01:00:17.538840   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:17.539313   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:17.539337   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:17.539276   59797 retry.go:31] will retry after 562.894181ms: waiting for machine to come up
	I1101 01:00:18.104173   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:18.104678   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:18.104712   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:18.104641   59797 retry.go:31] will retry after 659.582768ms: waiting for machine to come up
	I1101 01:00:18.766319   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:18.766719   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:18.766749   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:18.766688   59797 retry.go:31] will retry after 626.783168ms: waiting for machine to come up
	I1101 01:00:19.395203   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:19.395693   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:19.395720   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:19.395651   59797 retry.go:31] will retry after 884.294618ms: waiting for machine to come up
	I1101 01:00:20.281677   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:20.282152   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:20.282176   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:20.282094   59797 retry.go:31] will retry after 997.794459ms: waiting for machine to come up
	I1101 01:00:21.281118   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:21.281568   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:21.281596   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:21.281525   59797 retry.go:31] will retry after 1.624252325s: waiting for machine to come up
	I1101 01:00:18.514400   58730 crio.go:444] Took 1.783693 seconds to copy over tarball
	I1101 01:00:18.514460   58730 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 01:00:21.481089   58730 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.966600648s)
	I1101 01:00:21.481118   58730 crio.go:451] Took 2.966695 seconds to extract the tarball
	I1101 01:00:21.481130   58730 ssh_runner.go:146] rm: /preloaded.tar.lz4
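	The preload step above copies the preloaded-images tarball to the guest and unpacks it with "sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4" before deleting it. A rough Go sketch of that kind of extraction (assumes the github.com/pierrec/lz4/v4 package; handles directories and regular files only, and the paths in main are placeholders):
	
	package main
	
	import (
		"archive/tar"
		"fmt"
		"io"
		"os"
		"path/filepath"
	
		"github.com/pierrec/lz4/v4"
	)
	
	// extract is a rough Go equivalent of `tar -I lz4 -C dest -xf src` as run
	// in the log above. Hypothetical sketch: directories and regular files only.
	func extract(src, dest string) error {
		f, err := os.Open(src)
		if err != nil {
			return err
		}
		defer f.Close()
		tr := tar.NewReader(lz4.NewReader(f))
		for {
			hdr, err := tr.Next()
			if err == io.EOF {
				return nil
			}
			if err != nil {
				return err
			}
			target := filepath.Join(dest, hdr.Name)
			switch hdr.Typeflag {
			case tar.TypeDir:
				if err := os.MkdirAll(target, 0o755); err != nil {
					return err
				}
			case tar.TypeReg:
				if err := os.MkdirAll(filepath.Dir(target), 0o755); err != nil {
					return err
				}
				out, err := os.Create(target)
				if err != nil {
					return err
				}
				if _, err := io.Copy(out, tr); err != nil {
					out.Close()
					return err
				}
				out.Close()
			}
		}
	}
	
	func main() {
		if err := extract("preloaded.tar.lz4", "/tmp/preload"); err != nil {
			fmt.Println(err)
		}
	}
	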
	I1101 01:00:21.520934   58730 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:00:21.568541   58730 crio.go:496] all images are preloaded for cri-o runtime.
	I1101 01:00:21.568569   58730 cache_images.go:84] Images are preloaded, skipping loading
	I1101 01:00:21.568638   58730 ssh_runner.go:195] Run: crio config
	I1101 01:00:21.626687   58730 cni.go:84] Creating CNI manager for ""
	I1101 01:00:21.626707   58730 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:00:21.626724   58730 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 01:00:21.626745   58730 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.83 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-754132 NodeName:embed-certs-754132 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.83"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.83 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 01:00:21.626906   58730 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.83
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-754132"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.83
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.83"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 01:00:21.627000   58730 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-754132 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.83
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:embed-certs-754132 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1101 01:00:21.627062   58730 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 01:00:21.635965   58730 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 01:00:21.636048   58730 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 01:00:21.644318   58730 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1101 01:00:21.659722   58730 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 01:00:21.674541   58730 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1101 01:00:21.690451   58730 ssh_runner.go:195] Run: grep 192.168.61.83	control-plane.minikube.internal$ /etc/hosts
	I1101 01:00:21.694013   58730 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.83	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:00:21.705929   58730 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132 for IP: 192.168.61.83
	I1101 01:00:21.705978   58730 certs.go:190] acquiring lock for shared ca certs: {Name:mk6185b3938c4b51e80f57b7c81926c81b632b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:00:21.706152   58730 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key
	I1101 01:00:21.706193   58730 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key
	I1101 01:00:21.706255   58730 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/client.key
	I1101 01:00:21.706321   58730 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/apiserver.key.00ce3257
	I1101 01:00:21.706365   58730 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/proxy-client.key
	I1101 01:00:21.706507   58730 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem (1338 bytes)
	W1101 01:00:21.706541   58730 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504_empty.pem, impossibly tiny 0 bytes
	I1101 01:00:21.706552   58730 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 01:00:21.706580   58730 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem (1078 bytes)
	I1101 01:00:21.706606   58730 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem (1123 bytes)
	I1101 01:00:21.706633   58730 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem (1675 bytes)
	I1101 01:00:21.706670   58730 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:00:21.707263   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 01:00:21.734199   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 01:00:21.760230   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 01:00:21.787083   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/embed-certs-754132/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 01:00:21.810498   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 01:00:21.833905   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 01:00:21.859073   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 01:00:21.881222   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 01:00:21.904432   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem --> /usr/share/ca-certificates/14504.pem (1338 bytes)
	I1101 01:00:21.934873   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /usr/share/ca-certificates/145042.pem (1708 bytes)
	I1101 01:00:21.958353   58730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 01:00:21.981353   58730 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 01:00:21.997436   58730 ssh_runner.go:195] Run: openssl version
	I1101 01:00:22.003487   58730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14504.pem && ln -fs /usr/share/ca-certificates/14504.pem /etc/ssl/certs/14504.pem"
	I1101 01:00:22.013829   58730 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14504.pem
	I1101 01:00:22.018482   58730 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 01:00:22.018554   58730 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14504.pem
	I1101 01:00:22.024695   58730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14504.pem /etc/ssl/certs/51391683.0"
	I1101 01:00:22.034956   58730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145042.pem && ln -fs /usr/share/ca-certificates/145042.pem /etc/ssl/certs/145042.pem"
	I1101 01:00:22.046182   58730 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145042.pem
	I1101 01:00:22.051197   58730 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 01:00:22.051273   58730 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145042.pem
	I1101 01:00:22.057145   58730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145042.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 01:00:22.067337   58730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 01:00:22.077300   58730 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:22.081973   58730 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:22.082025   58730 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:22.087341   58730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 01:00:22.097021   58730 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 01:00:22.101801   58730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 01:00:22.107498   58730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 01:00:22.113187   58730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 01:00:22.119281   58730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 01:00:22.125109   58730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 01:00:22.130878   58730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 01:00:22.136711   58730 kubeadm.go:404] StartCluster: {Name:embed-certs-754132 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:embed-certs-754132 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.83 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 01:00:22.136843   58730 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 01:00:22.136898   58730 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:00:22.172188   58730 cri.go:89] found id: ""
	I1101 01:00:22.172267   58730 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 01:00:22.181863   58730 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1101 01:00:22.181901   58730 kubeadm.go:636] restartCluster start
	I1101 01:00:22.181962   58730 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 01:00:22.190970   58730 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:22.192108   58730 kubeconfig.go:92] found "embed-certs-754132" server: "https://192.168.61.83:8443"
	I1101 01:00:22.194633   58730 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 01:00:22.203708   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:22.203792   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:22.214867   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:22.214889   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:22.214972   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:22.225940   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:22.726677   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:22.726769   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:22.737874   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:23.226416   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:23.226492   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:23.237902   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:22.907053   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:22.907532   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:22.907563   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:22.907487   59797 retry.go:31] will retry after 2.170221456s: waiting for machine to come up
	I1101 01:00:25.079354   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:25.079791   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:25.079831   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:25.079754   59797 retry.go:31] will retry after 2.279141994s: waiting for machine to come up
	I1101 01:00:27.361955   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:27.362423   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:27.362456   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:27.362368   59797 retry.go:31] will retry after 2.772425742s: waiting for machine to come up
	I1101 01:00:23.726108   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:23.726179   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:23.737404   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:24.226007   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:24.226178   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:24.237401   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:24.727058   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:24.727152   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:24.742704   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:25.226166   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:25.226272   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:25.237808   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:25.726161   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:25.726244   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:25.737763   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:26.226321   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:26.226485   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:26.239919   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:26.726488   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:26.726596   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:26.740719   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:27.226157   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:27.226268   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:27.240719   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:27.726272   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:27.726360   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:27.738068   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:28.226882   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:28.226954   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:28.239208   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:30.136893   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:30.137311   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | unable to find current IP address of domain old-k8s-version-330042 in network mk-old-k8s-version-330042
	I1101 01:00:30.137333   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | I1101 01:00:30.137274   59797 retry.go:31] will retry after 4.191062934s: waiting for machine to come up
	I1101 01:00:28.726726   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:28.726845   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:28.737955   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:29.226410   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:29.226475   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:29.237886   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:29.726367   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:29.726461   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:29.737767   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:30.226294   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:30.226389   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:30.237767   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:30.726295   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:30.726363   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:30.737691   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:31.226274   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:31.226343   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:31.237801   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:31.726297   58730 api_server.go:166] Checking apiserver status ...
	I1101 01:00:31.726366   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:31.738060   58730 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:32.204696   58730 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1101 01:00:32.204729   58730 kubeadm.go:1128] stopping kube-system containers ...
	I1101 01:00:32.204741   58730 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 01:00:32.204792   58730 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:00:32.241943   58730 cri.go:89] found id: ""
	I1101 01:00:32.242012   58730 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 01:00:32.256657   58730 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:00:32.265087   58730 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:00:32.265159   58730 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:00:32.273631   58730 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 01:00:32.273654   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:32.379073   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:35.634014   59148 start.go:369] acquired machines lock for "default-k8s-diff-port-639310" in 4m10.491521982s
	I1101 01:00:35.634070   59148 start.go:96] Skipping create...Using existing machine configuration
	I1101 01:00:35.634078   59148 fix.go:54] fixHost starting: 
	I1101 01:00:35.634533   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:00:35.634577   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:00:35.654259   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46439
	I1101 01:00:35.654746   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:00:35.655216   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:00:35.655245   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:00:35.655578   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:00:35.655759   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:35.655905   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetState
	I1101 01:00:35.657604   59148 fix.go:102] recreateIfNeeded on default-k8s-diff-port-639310: state=Stopped err=<nil>
	I1101 01:00:35.657646   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	W1101 01:00:35.657804   59148 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 01:00:35.660028   59148 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-639310" ...
	I1101 01:00:34.332963   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.333486   58823 main.go:141] libmachine: (old-k8s-version-330042) Found IP for machine: 192.168.39.90
	I1101 01:00:34.333518   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has current primary IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.333529   58823 main.go:141] libmachine: (old-k8s-version-330042) Reserving static IP address...
	I1101 01:00:34.333853   58823 main.go:141] libmachine: (old-k8s-version-330042) Reserved static IP address: 192.168.39.90
	I1101 01:00:34.333874   58823 main.go:141] libmachine: (old-k8s-version-330042) Waiting for SSH to be available...
	I1101 01:00:34.333901   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "old-k8s-version-330042", mac: "52:54:00:a2:40:80", ip: "192.168.39.90"} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.333932   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | skip adding static IP to network mk-old-k8s-version-330042 - found existing host DHCP lease matching {name: "old-k8s-version-330042", mac: "52:54:00:a2:40:80", ip: "192.168.39.90"}
	I1101 01:00:34.333954   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Getting to WaitForSSH function...
	I1101 01:00:34.335871   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.336238   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.336275   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.336409   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Using SSH client type: external
	I1101 01:00:34.336446   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa (-rw-------)
	I1101 01:00:34.336480   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.90 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 01:00:34.336501   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | About to run SSH command:
	I1101 01:00:34.336523   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | exit 0
	I1101 01:00:34.431938   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | SSH cmd err, output: <nil>: 
	I1101 01:00:34.432324   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetConfigRaw
	I1101 01:00:34.433070   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetIP
	I1101 01:00:34.435967   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.436402   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.436434   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.436696   58823 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/config.json ...
	I1101 01:00:34.436886   58823 machine.go:88] provisioning docker machine ...
	I1101 01:00:34.436903   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:34.437136   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetMachineName
	I1101 01:00:34.437299   58823 buildroot.go:166] provisioning hostname "old-k8s-version-330042"
	I1101 01:00:34.437323   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetMachineName
	I1101 01:00:34.437508   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:34.439785   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.440175   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.440215   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.440316   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:34.440481   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:34.440662   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:34.440800   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:34.440965   58823 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:34.441440   58823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1101 01:00:34.441461   58823 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-330042 && echo "old-k8s-version-330042" | sudo tee /etc/hostname
	I1101 01:00:34.590132   58823 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-330042
	
	I1101 01:00:34.590168   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:34.593018   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.593457   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.593521   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.593623   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:34.593817   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:34.594004   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:34.594151   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:34.594317   58823 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:34.594622   58823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1101 01:00:34.594640   58823 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-330042' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-330042/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-330042' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 01:00:34.743448   58823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 01:00:34.743485   58823 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1101 01:00:34.743510   58823 buildroot.go:174] setting up certificates
	I1101 01:00:34.743530   58823 provision.go:83] configureAuth start
	I1101 01:00:34.743545   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetMachineName
	I1101 01:00:34.743848   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetIP
	I1101 01:00:34.746932   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.747302   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.747333   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.747478   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:34.749794   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.750154   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.750185   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.750339   58823 provision.go:138] copyHostCerts
	I1101 01:00:34.750412   58823 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem, removing ...
	I1101 01:00:34.750435   58823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 01:00:34.750504   58823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1101 01:00:34.750620   58823 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem, removing ...
	I1101 01:00:34.750628   58823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 01:00:34.750655   58823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1101 01:00:34.750726   58823 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem, removing ...
	I1101 01:00:34.750736   58823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 01:00:34.750761   58823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1101 01:00:34.750820   58823 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-330042 san=[192.168.39.90 192.168.39.90 localhost 127.0.0.1 minikube old-k8s-version-330042]
	I1101 01:00:34.819269   58823 provision.go:172] copyRemoteCerts
	I1101 01:00:34.819327   58823 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 01:00:34.819354   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:34.822409   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.822852   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:34.822887   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:34.823101   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:34.823335   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:34.823520   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:34.823688   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:00:34.928534   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 01:00:34.955140   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1101 01:00:34.982361   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 01:00:35.007980   58823 provision.go:86] duration metric: configureAuth took 264.432358ms
	I1101 01:00:35.008007   58823 buildroot.go:189] setting minikube options for container-runtime
	I1101 01:00:35.008317   58823 config.go:182] Loaded profile config "old-k8s-version-330042": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1101 01:00:35.008450   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:35.011424   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.011790   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:35.011820   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.012054   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:35.012305   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.012505   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.012692   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:35.012898   58823 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:35.013292   58823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1101 01:00:35.013310   58823 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 01:00:35.345179   58823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 01:00:35.345210   58823 machine.go:91] provisioned docker machine in 908.310008ms
	I1101 01:00:35.345224   58823 start.go:300] post-start starting for "old-k8s-version-330042" (driver="kvm2")
	I1101 01:00:35.345236   58823 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 01:00:35.345283   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:35.345634   58823 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 01:00:35.345666   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:35.348576   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.348945   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:35.348978   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.349171   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:35.349364   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.349527   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:35.349672   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:00:35.448239   58823 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 01:00:35.453459   58823 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 01:00:35.453495   58823 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1101 01:00:35.453589   58823 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1101 01:00:35.453705   58823 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> 145042.pem in /etc/ssl/certs
	I1101 01:00:35.453819   58823 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 01:00:35.464658   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:00:35.488669   58823 start.go:303] post-start completed in 143.429717ms
	I1101 01:00:35.488699   58823 fix.go:56] fixHost completed within 20.248082329s
	I1101 01:00:35.488723   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:35.491535   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.491917   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:35.491962   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.492108   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:35.492302   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.492472   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.492610   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:35.492777   58823 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:35.493085   58823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1101 01:00:35.493097   58823 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1101 01:00:35.633831   58823 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698800435.580601462
	
	I1101 01:00:35.633860   58823 fix.go:206] guest clock: 1698800435.580601462
	I1101 01:00:35.633872   58823 fix.go:219] Guest: 2023-11-01 01:00:35.580601462 +0000 UTC Remote: 2023-11-01 01:00:35.488703086 +0000 UTC m=+308.076532844 (delta=91.898376ms)
	I1101 01:00:35.633899   58823 fix.go:190] guest clock delta is within tolerance: 91.898376ms
	I1101 01:00:35.633906   58823 start.go:83] releasing machines lock for "old-k8s-version-330042", held for 20.393324923s
	I1101 01:00:35.633937   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:35.634276   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetIP
	I1101 01:00:35.637052   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.637411   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:35.637462   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.637668   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:35.638239   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:35.638479   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:00:35.638661   58823 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 01:00:35.638703   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:35.638792   58823 ssh_runner.go:195] Run: cat /version.json
	I1101 01:00:35.638813   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:00:35.641913   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:35.641919   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.642071   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:35.642094   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.642106   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.642151   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.642323   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:35.642517   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:35.642547   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:35.642608   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:00:35.642640   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:00:35.642736   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:00:35.642872   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:00:35.642994   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:00:35.772469   58823 ssh_runner.go:195] Run: systemctl --version
	I1101 01:00:35.778377   58823 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 01:00:35.930189   58823 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 01:00:35.937481   58823 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 01:00:35.937583   58823 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 01:00:35.959054   58823 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 01:00:35.959081   58823 start.go:472] detecting cgroup driver to use...
	I1101 01:00:35.959166   58823 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 01:00:35.978338   58823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 01:00:35.994627   58823 docker.go:204] disabling cri-docker service (if available) ...
	I1101 01:00:35.994690   58823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 01:00:36.010212   58823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 01:00:36.025616   58823 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 01:00:36.132484   58823 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 01:00:36.266531   58823 docker.go:220] disabling docker service ...
	I1101 01:00:36.266604   58823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 01:00:36.280303   58823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 01:00:36.291905   58823 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 01:00:36.413114   58823 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 01:00:36.527297   58823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 01:00:36.540547   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 01:00:36.561997   58823 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1101 01:00:36.562070   58823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:36.574735   58823 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 01:00:36.574809   58823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:36.584015   58823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:36.592896   58823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:36.602199   58823 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 01:00:36.611742   58823 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 01:00:36.620073   58823 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 01:00:36.620140   58823 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 01:00:36.633237   58823 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 01:00:36.641679   58823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 01:00:36.786323   58823 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 01:00:37.011240   58823 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 01:00:37.011332   58823 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 01:00:37.016349   58823 start.go:540] Will wait 60s for crictl version
	I1101 01:00:37.016417   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:37.020952   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 01:00:37.068566   58823 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1101 01:00:37.068649   58823 ssh_runner.go:195] Run: crio --version
	I1101 01:00:37.119257   58823 ssh_runner.go:195] Run: crio --version
	I1101 01:00:37.170471   58823 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1101 01:00:37.172128   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetIP
	I1101 01:00:37.175116   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:37.175552   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:00:37.175583   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:00:37.175834   58823 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1101 01:00:37.179970   58823 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:00:37.193466   58823 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1101 01:00:37.193550   58823 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:00:37.239780   58823 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1101 01:00:37.239851   58823 ssh_runner.go:195] Run: which lz4
	I1101 01:00:37.243871   58823 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1101 01:00:37.248203   58823 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 01:00:37.248243   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1101 01:00:33.273385   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:33.468847   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:33.558663   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:33.632226   58730 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:00:33.632305   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:33.645291   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:34.159920   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:34.660339   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:35.159837   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:35.659362   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:36.159870   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:36.189698   58730 api_server.go:72] duration metric: took 2.557471176s to wait for apiserver process to appear ...
	I1101 01:00:36.189726   58730 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:00:36.189746   58730 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8443/healthz ...
	I1101 01:00:35.662001   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Start
	I1101 01:00:35.662248   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Ensuring networks are active...
	I1101 01:00:35.663075   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Ensuring network default is active
	I1101 01:00:35.663589   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Ensuring network mk-default-k8s-diff-port-639310 is active
	I1101 01:00:35.664066   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Getting domain xml...
	I1101 01:00:35.664780   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Creating domain...
	I1101 01:00:37.046385   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting to get IP...
	I1101 01:00:37.047592   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.048056   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.048160   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:37.048064   59967 retry.go:31] will retry after 244.19131ms: waiting for machine to come up
	I1101 01:00:37.293636   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.294421   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.294535   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:37.294483   59967 retry.go:31] will retry after 281.302105ms: waiting for machine to come up
	I1101 01:00:37.577271   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.577934   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.577962   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:37.577874   59967 retry.go:31] will retry after 376.713113ms: waiting for machine to come up
	I1101 01:00:37.956666   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.957154   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:37.957182   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:37.957125   59967 retry.go:31] will retry after 366.92844ms: waiting for machine to come up
	I1101 01:00:38.325741   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:38.326257   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:38.326291   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:38.326226   59967 retry.go:31] will retry after 478.435824ms: waiting for machine to come up
	I1101 01:00:38.806215   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:38.806928   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:38.806965   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:38.806904   59967 retry.go:31] will retry after 910.120665ms: waiting for machine to come up
	I1101 01:00:39.718641   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:39.719281   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:39.719307   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:39.719210   59967 retry.go:31] will retry after 1.017844602s: waiting for machine to come up
	I1101 01:00:40.636542   58730 api_server.go:279] https://192.168.61.83:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 01:00:40.636586   58730 api_server.go:103] status: https://192.168.61.83:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 01:00:40.636602   58730 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8443/healthz ...
	I1101 01:00:40.687211   58730 api_server.go:279] https://192.168.61.83:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 01:00:40.687258   58730 api_server.go:103] status: https://192.168.61.83:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 01:00:41.187988   58730 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8443/healthz ...
	I1101 01:00:41.197585   58730 api_server.go:279] https://192.168.61.83:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:00:41.197626   58730 api_server.go:103] status: https://192.168.61.83:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:00:41.688019   58730 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8443/healthz ...
	I1101 01:00:41.698406   58730 api_server.go:279] https://192.168.61.83:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:00:41.698439   58730 api_server.go:103] status: https://192.168.61.83:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:00:42.188141   58730 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8443/healthz ...
	I1101 01:00:42.195663   58730 api_server.go:279] https://192.168.61.83:8443/healthz returned 200:
	ok
	I1101 01:00:42.204715   58730 api_server.go:141] control plane version: v1.28.3
	I1101 01:00:42.204746   58730 api_server.go:131] duration metric: took 6.015012484s to wait for apiserver health ...
	I1101 01:00:42.204756   58730 cni.go:84] Creating CNI manager for ""
	I1101 01:00:42.204764   58730 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:00:42.206831   58730 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:00:38.979032   58823 crio.go:444] Took 1.735199 seconds to copy over tarball
	I1101 01:00:38.979127   58823 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 01:00:42.235526   58823 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.256363592s)
	I1101 01:00:42.235558   58823 crio.go:451] Took 3.256498 seconds to extract the tarball
	I1101 01:00:42.235592   58823 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 01:00:42.278508   58823 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:00:42.332199   58823 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1101 01:00:42.332225   58823 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1101 01:00:42.332323   58823 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:00:42.332383   58823 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1101 01:00:42.332425   58823 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1101 01:00:42.332445   58823 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1101 01:00:42.332394   58823 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1101 01:00:42.332554   58823 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1101 01:00:42.332552   58823 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1101 01:00:42.332611   58823 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1101 01:00:42.333952   58823 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1101 01:00:42.333965   58823 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1101 01:00:42.333971   58823 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1101 01:00:42.333973   58823 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:00:42.333951   58823 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1101 01:00:42.333959   58823 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1101 01:00:42.334015   58823 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1101 01:00:42.334422   58823 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1101 01:00:42.208425   58730 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:00:42.243672   58730 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1101 01:00:42.270472   58730 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:00:40.739283   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:40.739839   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:40.739871   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:40.739751   59967 retry.go:31] will retry after 924.830892ms: waiting for machine to come up
	I1101 01:00:41.666231   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:41.666922   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:41.666949   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:41.666878   59967 retry.go:31] will retry after 1.792434708s: waiting for machine to come up
	I1101 01:00:43.461158   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:43.461723   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:43.461758   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:43.461651   59967 retry.go:31] will retry after 1.458280506s: waiting for machine to come up
	I1101 01:00:44.921321   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:44.922072   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:44.922105   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:44.922018   59967 retry.go:31] will retry after 2.732488928s: waiting for machine to come up
	I1101 01:00:42.548949   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1101 01:00:42.549011   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1101 01:00:42.552787   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1101 01:00:42.554125   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1101 01:00:42.559301   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1101 01:00:42.560733   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1101 01:00:42.564609   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1101 01:00:42.857456   58823 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1101 01:00:42.857497   58823 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1101 01:00:42.857537   58823 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1101 01:00:42.857565   58823 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1101 01:00:42.857583   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.857502   58823 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1101 01:00:42.857597   58823 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1101 01:00:42.857644   58823 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1101 01:00:42.857703   58823 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1101 01:00:42.857733   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.857663   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.857666   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.880301   58823 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1101 01:00:42.880350   58823 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1101 01:00:42.880362   58823 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1101 01:00:42.880404   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.880421   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1101 01:00:42.880432   58823 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1101 01:00:42.880473   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.880475   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1101 01:00:42.880542   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1101 01:00:42.880377   58823 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1101 01:00:42.880587   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1101 01:00:42.880610   58823 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1101 01:00:42.880663   58823 ssh_runner.go:195] Run: which crictl
	I1101 01:00:42.958449   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1101 01:00:42.975151   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1101 01:00:42.975188   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1101 01:00:42.979136   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1101 01:00:42.979198   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1101 01:00:42.979246   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1101 01:00:42.979306   58823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1101 01:00:43.059447   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1101 01:00:43.059470   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1101 01:00:43.059515   58823 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1101 01:00:43.059572   58823 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1101 01:00:43.065313   58823 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1101 01:00:43.065337   58823 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1101 01:00:43.065397   58823 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1101 01:00:43.212775   58823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:00:44.821509   58823 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.756075689s)
	I1101 01:00:44.821542   58823 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1101 01:00:44.821600   58823 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.608800531s)
	I1101 01:00:44.821639   58823 cache_images.go:92] LoadImages completed in 2.489401317s
	W1101 01:00:44.821749   58823 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0: no such file or directory
	I1101 01:00:44.821888   58823 ssh_runner.go:195] Run: crio config
	I1101 01:00:44.911017   58823 cni.go:84] Creating CNI manager for ""
	I1101 01:00:44.911094   58823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:00:44.911132   58823 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 01:00:44.911173   58823 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.90 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-330042 NodeName:old-k8s-version-330042 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1101 01:00:44.911365   58823 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-330042"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-330042
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.90:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 01:00:44.911510   58823 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-330042 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-330042 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1101 01:00:44.911601   58823 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1101 01:00:44.925733   58823 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 01:00:44.925810   58823 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 01:00:44.939166   58823 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1101 01:00:44.962847   58823 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 01:00:44.986855   58823 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1101 01:00:45.011998   58823 ssh_runner.go:195] Run: grep 192.168.39.90	control-plane.minikube.internal$ /etc/hosts
	I1101 01:00:45.017160   58823 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.90	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:00:45.035826   58823 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042 for IP: 192.168.39.90
	I1101 01:00:45.035866   58823 certs.go:190] acquiring lock for shared ca certs: {Name:mk6185b3938c4b51e80f57b7c81926c81b632b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:00:45.036097   58823 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key
	I1101 01:00:45.036161   58823 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key
	I1101 01:00:45.036276   58823 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/client.key
	I1101 01:00:45.036363   58823 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/apiserver.key.05a13cdc
	I1101 01:00:45.036423   58823 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/proxy-client.key
	I1101 01:00:45.036600   58823 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem (1338 bytes)
	W1101 01:00:45.036642   58823 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504_empty.pem, impossibly tiny 0 bytes
	I1101 01:00:45.036657   58823 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 01:00:45.036697   58823 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem (1078 bytes)
	I1101 01:00:45.036734   58823 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem (1123 bytes)
	I1101 01:00:45.036769   58823 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem (1675 bytes)
	I1101 01:00:45.036837   58823 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:00:45.037808   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 01:00:45.071828   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 01:00:45.105069   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 01:00:45.136650   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/old-k8s-version-330042/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 01:00:45.169633   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 01:00:45.202102   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 01:00:45.234227   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 01:00:45.265901   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 01:00:45.297720   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem --> /usr/share/ca-certificates/14504.pem (1338 bytes)
	I1101 01:00:45.330915   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /usr/share/ca-certificates/145042.pem (1708 bytes)
	I1101 01:00:45.361364   58823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 01:00:45.391023   58823 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 01:00:45.412643   58823 ssh_runner.go:195] Run: openssl version
	I1101 01:00:45.419938   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145042.pem && ln -fs /usr/share/ca-certificates/145042.pem /etc/ssl/certs/145042.pem"
	I1101 01:00:45.433972   58823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145042.pem
	I1101 01:00:45.439966   58823 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 01:00:45.440070   58823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145042.pem
	I1101 01:00:45.447248   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145042.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 01:00:45.461261   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 01:00:45.475166   58823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:45.481174   58823 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:45.481281   58823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:45.488190   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 01:00:45.502428   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14504.pem && ln -fs /usr/share/ca-certificates/14504.pem /etc/ssl/certs/14504.pem"
	I1101 01:00:45.515353   58823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14504.pem
	I1101 01:00:45.520135   58823 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 01:00:45.520196   58823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14504.pem
	I1101 01:00:45.525605   58823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14504.pem /etc/ssl/certs/51391683.0"
	I1101 01:00:45.535886   58823 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 01:00:45.540671   58823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 01:00:45.546973   58823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 01:00:45.554439   58823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 01:00:45.562216   58823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 01:00:45.570082   58823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 01:00:45.578073   58823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 01:00:45.586056   58823 kubeadm.go:404] StartCluster: {Name:old-k8s-version-330042 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.16.0 ClusterName:old-k8s-version-330042 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 01:00:45.586202   58823 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 01:00:45.586270   58823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:00:45.632205   58823 cri.go:89] found id: ""
	I1101 01:00:45.632279   58823 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 01:00:45.646397   58823 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1101 01:00:45.646432   58823 kubeadm.go:636] restartCluster start
	I1101 01:00:45.646492   58823 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 01:00:45.660754   58823 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:45.662302   58823 kubeconfig.go:92] found "old-k8s-version-330042" server: "https://192.168.39.90:8443"
	I1101 01:00:45.665617   58823 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 01:00:45.679127   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:45.679203   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:45.697578   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:45.697601   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:45.697662   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:45.715086   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:46.215841   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:46.215939   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:46.233039   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:46.715162   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:46.715283   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:46.727101   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:47.215409   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:47.215512   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:47.228104   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
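
The repeated "Checking apiserver status ..." entries above are a poll loop: run pgrep for the kube-apiserver process on the guest, wait roughly half a second, and try again until a deadline expires. A minimal Go sketch of that pattern, assuming a local runCmd stand-in for the real ssh_runner and an arbitrary 10-second timeout (not minikube's actual code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// runCmd stands in for minikube's ssh_runner; here it just runs the command locally.
func runCmd(name string, args ...string) error {
	return exec.Command(name, args...).Run()
}

// waitForAPIServerPID polls pgrep until a kube-apiserver process shows up
// or the deadline passes, mirroring the repeated checks in the log.
func waitForAPIServerPID(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := runCmd("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*"); err == nil {
			return nil // process found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for kube-apiserver process")
}

func main() {
	if err := waitForAPIServerPID(10 * time.Second); err != nil {
		fmt.Println("apiserver not up:", err)
	}
}
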
	I1101 01:00:43.297105   58730 system_pods.go:59] 9 kube-system pods found
	I1101 01:00:43.452043   58730 system_pods.go:61] "coredns-5dd5756b68-9hvh7" [d7d126c2-c270-452c-b939-15303a174742] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 01:00:43.452062   58730 system_pods.go:61] "coredns-5dd5756b68-gptmc" [fbbb9f17-32d6-456d-8171-eadcf64b11a8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 01:00:43.452074   58730 system_pods.go:61] "etcd-embed-certs-754132" [3c7474c1-788e-461d-bd20-e62c3c12cf27] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 01:00:43.452086   58730 system_pods.go:61] "kube-apiserver-embed-certs-754132" [d218a8d6-536c-400a-b81e-325b89ab475b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 01:00:43.452116   58730 system_pods.go:61] "kube-controller-manager-embed-certs-754132" [930b7861-b807-4f24-ba3c-9365a1e8dd8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 01:00:43.452128   58730 system_pods.go:61] "kube-proxy-d5d5x" [c7a6d923-0b37-452b-9979-0a64c05ee737] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 01:00:43.452142   58730 system_pods.go:61] "kube-scheduler-embed-certs-754132" [fd9c0833-f9d4-41cf-b5dd-b676ea5da6ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 01:00:43.452156   58730 system_pods.go:61] "metrics-server-57f55c9bc5-znchz" [60da0fbf-a2c4-4910-b06b-251b33b8ad0b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:00:43.452169   58730 system_pods.go:61] "storage-provisioner" [fbece4fb-6f83-4f17-acb8-94f493dd72e9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 01:00:43.452185   58730 system_pods.go:74] duration metric: took 1.181683794s to wait for pod list to return data ...
	I1101 01:00:43.452198   58730 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:00:44.181694   58730 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:00:44.181739   58730 node_conditions.go:123] node cpu capacity is 2
	I1101 01:00:44.181756   58730 node_conditions.go:105] duration metric: took 729.549671ms to run NodePressure ...
	I1101 01:00:44.181784   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:45.274729   58730 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.092921592s)
	I1101 01:00:45.274761   58730 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1101 01:00:45.285444   58730 kubeadm.go:787] kubelet initialised
	I1101 01:00:45.285478   58730 kubeadm.go:788] duration metric: took 10.704919ms waiting for restarted kubelet to initialise ...
	I1101 01:00:45.285489   58730 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:00:45.303122   58730 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-9hvh7" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:47.333376   58730 pod_ready.go:92] pod "coredns-5dd5756b68-9hvh7" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:47.333404   58730 pod_ready.go:81] duration metric: took 2.030252648s waiting for pod "coredns-5dd5756b68-9hvh7" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:47.333415   58730 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-gptmc" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:47.339165   58730 pod_ready.go:92] pod "coredns-5dd5756b68-gptmc" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:47.339189   58730 pod_ready.go:81] duration metric: took 5.76803ms waiting for pod "coredns-5dd5756b68-gptmc" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:47.339201   58730 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:47.656259   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:47.656733   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:47.656767   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:47.656688   59967 retry.go:31] will retry after 3.546373187s: waiting for machine to come up
	I1101 01:00:47.716219   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:47.716302   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:47.729221   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:48.215453   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:48.215562   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:48.230259   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:48.715905   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:48.716035   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:48.729001   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:49.216123   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:49.216217   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:49.232128   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:49.715640   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:49.715708   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:49.729098   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:50.215271   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:50.215379   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:50.228075   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:50.715151   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:50.715256   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:50.726839   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:51.215204   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:51.215293   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:51.227412   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:51.715753   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:51.715870   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:51.728794   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:52.215318   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:52.215437   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:52.227527   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:48.860188   58730 pod_ready.go:92] pod "etcd-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:48.860215   58730 pod_ready.go:81] duration metric: took 1.521005544s waiting for pod "etcd-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:48.860228   58730 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:50.286848   58730 pod_ready.go:92] pod "kube-apiserver-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:50.286882   58730 pod_ready.go:81] duration metric: took 1.426640629s waiting for pod "kube-apiserver-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:50.286894   58730 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:51.886531   58730 pod_ready.go:92] pod "kube-controller-manager-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:51.886555   58730 pod_ready.go:81] duration metric: took 1.599653882s waiting for pod "kube-controller-manager-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:51.886565   58730 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d5d5x" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:52.079723   58730 pod_ready.go:92] pod "kube-proxy-d5d5x" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:52.079752   58730 pod_ready.go:81] duration metric: took 193.181169ms waiting for pod "kube-proxy-d5d5x" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:52.079766   58730 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:51.204423   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:51.204909   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | unable to find current IP address of domain default-k8s-diff-port-639310 in network mk-default-k8s-diff-port-639310
	I1101 01:00:51.204945   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | I1101 01:00:51.204854   59967 retry.go:31] will retry after 3.382936792s: waiting for machine to come up
	I1101 01:00:54.588976   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.589398   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Found IP for machine: 192.168.72.97
	I1101 01:00:54.589427   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Reserving static IP address...
	I1101 01:00:54.589447   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has current primary IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.589764   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Reserved static IP address: 192.168.72.97
	I1101 01:00:54.589783   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Waiting for SSH to be available...
	I1101 01:00:54.589811   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-639310", mac: "52:54:00:83:e0:44", ip: "192.168.72.97"} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.589841   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | skip adding static IP to network mk-default-k8s-diff-port-639310 - found existing host DHCP lease matching {name: "default-k8s-diff-port-639310", mac: "52:54:00:83:e0:44", ip: "192.168.72.97"}
	I1101 01:00:54.589858   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | Getting to WaitForSSH function...
	I1101 01:00:54.591920   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.592295   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.592327   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.592518   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | Using SSH client type: external
	I1101 01:00:54.592546   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa (-rw-------)
	I1101 01:00:54.592568   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 01:00:54.592581   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | About to run SSH command:
	I1101 01:00:54.592605   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | exit 0
	I1101 01:00:54.687664   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | SSH cmd err, output: <nil>: 
	I1101 01:00:54.688005   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetConfigRaw
	I1101 01:00:54.688653   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetIP
	I1101 01:00:54.691258   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.691761   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.691803   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.692096   59148 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/config.json ...
	I1101 01:00:54.692278   59148 machine.go:88] provisioning docker machine ...
	I1101 01:00:54.692297   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:54.692554   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetMachineName
	I1101 01:00:54.692765   59148 buildroot.go:166] provisioning hostname "default-k8s-diff-port-639310"
	I1101 01:00:54.692787   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetMachineName
	I1101 01:00:54.692962   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:54.695491   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.695887   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.695917   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.696074   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:54.696280   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:54.696477   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:54.696624   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:54.696817   59148 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:54.697275   59148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.97 22 <nil> <nil>}
	I1101 01:00:54.697298   59148 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-639310 && echo "default-k8s-diff-port-639310" | sudo tee /etc/hostname
	I1101 01:00:54.836084   59148 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-639310
	
	I1101 01:00:54.836118   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:54.839109   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.839437   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.839463   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.839732   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:54.839986   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:54.840131   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:54.840290   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:54.840501   59148 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:54.840865   59148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.97 22 <nil> <nil>}
	I1101 01:00:54.840885   59148 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-639310' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-639310/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-639310' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 01:00:54.979804   59148 main.go:141] libmachine: SSH cmd err, output: <nil>: 
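
The heredoc above patches /etc/hosts so the machine's hostname resolves: if a line ending in the hostname already exists it is left alone, an existing 127.0.1.1 entry is rewritten, and otherwise a new one is appended. The same decision expressed as a pure Go function over the file contents (the function name and the example input are assumptions, not minikube's helper):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostnameEntry returns hosts-file content that maps 127.0.1.1 to name:
// leave it alone if the name is already present, rewrite an existing
// 127.0.1.1 line, otherwise append one.
func ensureHostnameEntry(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
		return hosts
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostnameEntry("127.0.0.1 localhost\n", "default-k8s-diff-port-639310"))
}
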
	I1101 01:00:54.979841   59148 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1101 01:00:54.979870   59148 buildroot.go:174] setting up certificates
	I1101 01:00:54.979881   59148 provision.go:83] configureAuth start
	I1101 01:00:54.979898   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetMachineName
	I1101 01:00:54.980246   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetIP
	I1101 01:00:54.983397   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.983760   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.983794   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.984029   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:54.986746   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.987112   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:54.987160   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:54.987328   59148 provision.go:138] copyHostCerts
	I1101 01:00:54.987418   59148 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem, removing ...
	I1101 01:00:54.987437   59148 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 01:00:54.987507   59148 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1101 01:00:54.987619   59148 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem, removing ...
	I1101 01:00:54.987628   59148 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 01:00:54.987651   59148 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1101 01:00:54.987707   59148 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem, removing ...
	I1101 01:00:54.987714   59148 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 01:00:54.987731   59148 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1101 01:00:54.987773   59148 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-639310 san=[192.168.72.97 192.168.72.97 localhost 127.0.0.1 minikube default-k8s-diff-port-639310]
	I1101 01:00:56.081549   58676 start.go:369] acquired machines lock for "no-preload-008483" in 57.600332472s
	I1101 01:00:56.081600   58676 start.go:96] Skipping create...Using existing machine configuration
	I1101 01:00:56.081611   58676 fix.go:54] fixHost starting: 
	I1101 01:00:56.082003   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:00:56.082041   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:00:56.098896   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33091
	I1101 01:00:56.099300   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:00:56.099786   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:00:56.099817   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:00:56.100159   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:00:56.100364   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:00:56.100511   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetState
	I1101 01:00:56.104041   58676 fix.go:102] recreateIfNeeded on no-preload-008483: state=Stopped err=<nil>
	I1101 01:00:56.104071   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	W1101 01:00:56.104250   58676 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 01:00:56.106287   58676 out.go:177] * Restarting existing kvm2 VM for "no-preload-008483" ...
	I1101 01:00:52.715585   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:52.715665   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:52.726877   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:53.216119   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:53.216202   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:53.228700   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:53.715253   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:53.715342   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:53.729029   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:54.215451   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:54.215554   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:54.228217   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:54.715451   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:54.715513   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:54.727356   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:55.216034   58823 api_server.go:166] Checking apiserver status ...
	I1101 01:00:55.216130   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:00:55.227905   58823 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:00:55.680067   58823 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1101 01:00:55.680120   58823 kubeadm.go:1128] stopping kube-system containers ...
	I1101 01:00:55.680135   58823 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 01:00:55.680204   58823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:00:55.726658   58823 cri.go:89] found id: ""
	I1101 01:00:55.726744   58823 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 01:00:55.748477   58823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:00:55.758933   58823 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:00:55.759013   58823 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:00:55.769130   58823 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 01:00:55.769156   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:55.911136   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:57.164062   58823 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.252874473s)
	I1101 01:00:57.164095   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:57.403267   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
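
The restart path replays individual kubeadm init phases against the generated kubeadm.yaml (certs, kubeconfig, kubelet-start, control-plane, and later etcd, as seen further down). A condensed Go sketch of that sequence, run locally with os/exec and without the env PATH wrapper the log shows; helper names are assumptions:

package main

import (
	"fmt"
	"os/exec"
)

// runKubeadmPhases replays the init phases seen in the log against the
// generated config, stopping at the first failure.
func runKubeadmPhases(kubeadmBin, config string) error {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{kubeadmBin}, p...)
		args = append(args, "--config", config)
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("kubeadm %v failed: %v\n%s", p, err, out)
		}
	}
	return nil
}

func main() {
	err := runKubeadmPhases("/var/lib/minikube/binaries/v1.16.0/kubeadm", "/var/tmp/minikube/kubeadm.yaml")
	fmt.Println("phases err:", err)
}
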
	I1101 01:00:55.270327   59148 provision.go:172] copyRemoteCerts
	I1101 01:00:55.270394   59148 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 01:00:55.270418   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:55.272988   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.273410   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:55.273444   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.273609   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:55.273818   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:55.273966   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:55.274113   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:00:55.367354   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 01:00:55.391069   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1101 01:00:55.413001   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 01:00:55.436904   59148 provision.go:86] duration metric: configureAuth took 457.006108ms
	I1101 01:00:55.436930   59148 buildroot.go:189] setting minikube options for container-runtime
	I1101 01:00:55.437115   59148 config.go:182] Loaded profile config "default-k8s-diff-port-639310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:00:55.437187   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:55.440286   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.440627   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:55.440662   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.440789   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:55.440989   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:55.441187   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:55.441330   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:55.441491   59148 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:55.441905   59148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.97 22 <nil> <nil>}
	I1101 01:00:55.441928   59148 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 01:00:55.788340   59148 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 01:00:55.788372   59148 machine.go:91] provisioned docker machine in 1.096081387s
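
The provisioning step just completed writes a sysconfig drop-in with the insecure-registry option and restarts cri-o. A rough local equivalent in Go, writing the file directly (which needs root) instead of piping through tee over SSH; paths mirror the log and the helper name is assumed:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// writeCRIOOptions drops the sysconfig file shown in the log and restarts cri-o.
func writeCRIOOptions(serviceCIDR string) error {
	content := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
	if err := os.MkdirAll("/etc/sysconfig", 0o755); err != nil {
		return err
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0o644); err != nil {
		return err
	}
	return exec.Command("systemctl", "restart", "crio").Run()
}

func main() {
	if err := writeCRIOOptions("10.96.0.0/12"); err != nil {
		fmt.Println("writeCRIOOptions:", err)
	}
}
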
	I1101 01:00:55.788386   59148 start.go:300] post-start starting for "default-k8s-diff-port-639310" (driver="kvm2")
	I1101 01:00:55.788401   59148 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 01:00:55.788443   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:55.788777   59148 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 01:00:55.788846   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:55.792110   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.792594   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:55.792626   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.792829   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:55.793080   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:55.793273   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:55.793421   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:00:55.893108   59148 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 01:00:55.898425   59148 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 01:00:55.898452   59148 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1101 01:00:55.898530   59148 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1101 01:00:55.898619   59148 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> 145042.pem in /etc/ssl/certs
	I1101 01:00:55.898751   59148 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 01:00:55.909396   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:00:55.943412   59148 start.go:303] post-start completed in 154.998365ms
	I1101 01:00:55.943440   59148 fix.go:56] fixHost completed within 20.309363198s
	I1101 01:00:55.943464   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:55.946417   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.946777   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:55.946810   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:55.947048   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:55.947268   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:55.947484   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:55.947662   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:55.947849   59148 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:55.948212   59148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.97 22 <nil> <nil>}
	I1101 01:00:55.948225   59148 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1101 01:00:56.081387   59148 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698800456.033536949
	
	I1101 01:00:56.081411   59148 fix.go:206] guest clock: 1698800456.033536949
	I1101 01:00:56.081422   59148 fix.go:219] Guest: 2023-11-01 01:00:56.033536949 +0000 UTC Remote: 2023-11-01 01:00:55.943445038 +0000 UTC m=+270.963710441 (delta=90.091911ms)
	I1101 01:00:56.081446   59148 fix.go:190] guest clock delta is within tolerance: 90.091911ms
	I1101 01:00:56.081451   59148 start.go:83] releasing machines lock for "default-k8s-diff-port-639310", held for 20.447404197s
	I1101 01:00:56.081484   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:56.081826   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetIP
	I1101 01:00:56.084827   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:56.085289   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:56.085326   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:56.085543   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:56.086049   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:56.086272   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:00:56.086374   59148 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 01:00:56.086425   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:56.086677   59148 ssh_runner.go:195] Run: cat /version.json
	I1101 01:00:56.086709   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:00:56.089377   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:56.089696   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:56.089784   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:56.089841   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:56.090077   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:56.090088   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:56.090108   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:56.090256   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:00:56.090329   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:56.090405   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:00:56.090479   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:56.090557   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:00:56.090613   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:00:56.090681   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:00:56.220669   59148 ssh_runner.go:195] Run: systemctl --version
	I1101 01:00:56.226971   59148 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 01:00:56.375845   59148 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 01:00:56.383893   59148 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 01:00:56.383986   59148 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 01:00:56.404009   59148 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 01:00:56.404035   59148 start.go:472] detecting cgroup driver to use...
	I1101 01:00:56.404107   59148 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 01:00:56.420015   59148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 01:00:56.435577   59148 docker.go:204] disabling cri-docker service (if available) ...
	I1101 01:00:56.435652   59148 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 01:00:56.448542   59148 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 01:00:56.465197   59148 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 01:00:56.607142   59148 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 01:00:56.739287   59148 docker.go:220] disabling docker service ...
	I1101 01:00:56.739366   59148 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 01:00:56.753861   59148 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 01:00:56.768891   59148 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 01:00:56.893929   59148 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 01:00:57.022891   59148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 01:00:57.039063   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 01:00:57.058893   59148 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 01:00:57.058964   59148 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:57.070769   59148 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 01:00:57.070845   59148 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:57.082528   59148 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:57.094350   59148 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:57.105953   59148 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 01:00:57.117745   59148 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 01:00:57.128493   59148 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 01:00:57.128553   59148 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 01:00:57.145858   59148 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 01:00:57.157318   59148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 01:00:57.288371   59148 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 01:00:57.489356   59148 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 01:00:57.489458   59148 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 01:00:57.495837   59148 start.go:540] Will wait 60s for crictl version
	I1101 01:00:57.495907   59148 ssh_runner.go:195] Run: which crictl
	I1101 01:00:57.500572   59148 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 01:00:57.546076   59148 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1101 01:00:57.546245   59148 ssh_runner.go:195] Run: crio --version
	I1101 01:00:57.601745   59148 ssh_runner.go:195] Run: crio --version
	I1101 01:00:57.664097   59148 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
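
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch the cgroup manager to cgroupfs, and force conmon_cgroup to "pod". The same edits as regex replacements on the config text in Go (function name and sample input are assumptions):

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the three edits from the log to the config text:
// pin pause_image, set cgroup_manager, and pin conmon_cgroup to "pod".
func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	in := "pause_image = \"registry.k8s.io/pause:3.6\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.9", "cgroupfs"))
}
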
	I1101 01:00:54.387561   58730 pod_ready.go:102] pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace has status "Ready":"False"
	I1101 01:00:56.388062   58730 pod_ready.go:92] pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:00:56.388085   58730 pod_ready.go:81] duration metric: took 4.308312567s waiting for pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:00:56.388094   58730 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace to be "Ready" ...
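
The pod_ready.go waits above amount to fetching each pod and looking for a PodReady condition with status True. One way to express that check with client-go; the kubeconfig path and pod name are placeholders, and this is not minikube's actual helper:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod has a PodReady condition with status True.
func podReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// placeholder kubeconfig path and pod name, for illustration only
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ready, err := podReady(context.Background(), cs, "kube-system", "etcd-embed-certs-754132")
	fmt.Println(ready, err)
}
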
	I1101 01:00:57.666096   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetIP
	I1101 01:00:57.670028   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:57.670437   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:00:57.670472   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:00:57.670760   59148 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1101 01:00:57.675850   59148 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:00:57.689379   59148 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 01:00:57.689439   59148 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:00:57.736333   59148 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1101 01:00:57.736404   59148 ssh_runner.go:195] Run: which lz4
	I1101 01:00:57.740532   59148 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 01:00:57.745488   59148 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 01:00:57.745535   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1101 01:00:59.649981   59148 crio.go:444] Took 1.909486 seconds to copy over tarball
	I1101 01:00:59.650070   59148 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
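
The preload step stats /preloaded.tar.lz4, copies the ~457 MB tarball over when it is missing, and unpacks it with tar -I lz4 into /var. A stripped-down local sketch of the check-and-extract half (the scp fallback is omitted; the helper name is assumed):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload checks for the tarball and unpacks it the way the log does.
func extractPreload(tarball, dest string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload tarball not present, copy it first: %w", err)
	}
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", dest, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println("extract failed:", err)
	}
}
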
	I1101 01:00:56.107642   58676 main.go:141] libmachine: (no-preload-008483) Calling .Start
	I1101 01:00:56.107815   58676 main.go:141] libmachine: (no-preload-008483) Ensuring networks are active...
	I1101 01:00:56.108696   58676 main.go:141] libmachine: (no-preload-008483) Ensuring network default is active
	I1101 01:00:56.109190   58676 main.go:141] libmachine: (no-preload-008483) Ensuring network mk-no-preload-008483 is active
	I1101 01:00:56.109623   58676 main.go:141] libmachine: (no-preload-008483) Getting domain xml...
	I1101 01:00:56.110400   58676 main.go:141] libmachine: (no-preload-008483) Creating domain...
	I1101 01:00:57.626479   58676 main.go:141] libmachine: (no-preload-008483) Waiting to get IP...
	I1101 01:00:57.627653   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:00:57.628245   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:00:57.628315   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:00:57.628210   60142 retry.go:31] will retry after 306.868541ms: waiting for machine to come up
	I1101 01:00:57.936854   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:00:57.937358   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:00:57.937392   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:00:57.937309   60142 retry.go:31] will retry after 366.94808ms: waiting for machine to come up
	I1101 01:00:58.306219   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:00:58.306880   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:00:58.306909   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:00:58.306815   60142 retry.go:31] will retry after 470.784378ms: waiting for machine to come up
	I1101 01:00:58.781164   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:00:58.781784   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:00:58.781810   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:00:58.781686   60142 retry.go:31] will retry after 475.883045ms: waiting for machine to come up
	I1101 01:00:59.259400   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:00:59.259922   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:00:59.259964   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:00:59.259816   60142 retry.go:31] will retry after 533.372113ms: waiting for machine to come up
	I1101 01:00:59.794619   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:00:59.795307   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:00:59.795335   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:00:59.795222   60142 retry.go:31] will retry after 643.335947ms: waiting for machine to come up
	I1101 01:01:00.440339   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:00.440876   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:00.440901   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:00.440795   60142 retry.go:31] will retry after 899.488876ms: waiting for machine to come up
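The repeated "will retry after ..." lines above come from polling the libvirt DHCP leases with a growing, slightly randomized delay until the domain reports an IP. A hedged sketch of that retry pattern; the function name, backoff constants, and jitter are illustrative, not minikube's retry package:

// retrysketch illustrates the "will retry after ..." pattern in the log:
// poll a condition with a growing, jittered delay until it succeeds or a
// deadline passes.
package retrysketch

import (
	"fmt"
	"math/rand"
	"time"
)

func WaitFor(check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: last error: %v", err)
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		delay = delay * 3 / 2 // back off gradually, as the intervals above do
	}
}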
	I1101 01:00:57.545316   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:00:57.641733   58823 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:00:57.641812   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:57.655826   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:58.173767   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:58.674113   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:59.174394   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:59.674240   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:00:59.705758   58823 api_server.go:72] duration metric: took 2.064024888s to wait for apiserver process to appear ...
	I1101 01:00:59.705791   58823 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:00:59.705814   58823 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I1101 01:00:58.517913   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:00.993028   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:03.059373   59148 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.409271602s)
	I1101 01:01:03.059403   59148 crio.go:451] Took 3.409395 seconds to extract the tarball
	I1101 01:01:03.059413   59148 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 01:01:03.101818   59148 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:01:03.153263   59148 crio.go:496] all images are preloaded for cri-o runtime.
	I1101 01:01:03.153284   59148 cache_images.go:84] Images are preloaded, skipping loading
	I1101 01:01:03.153341   59148 ssh_runner.go:195] Run: crio config
	I1101 01:01:03.228205   59148 cni.go:84] Creating CNI manager for ""
	I1101 01:01:03.228225   59148 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:01:03.228241   59148 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 01:01:03.228265   59148 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.97 APIServerPort:8444 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-639310 NodeName:default-k8s-diff-port-639310 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 01:01:03.228386   59148 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.97
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-639310"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 01:01:03.228463   59148 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-639310 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-639310 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
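The kubelet drop-in above is rendered in memory and then copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp lines below). A sketch of rendering a similar drop-in with text/template; the struct fields and template are illustrative and only reproduce what is visible in the log:

// kubeletunit renders a systemd drop-in similar to the one shown above.
// The template and struct are illustrative; minikube assembles the flags
// from its own config types.
package kubeletunit

import (
	"strings"
	"text/template"
)

type Opts struct {
	KubeletPath string
	NodeName    string
	NodeIP      string
	CRISocket   string
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func Render(o Opts) (string, error) {
	t, err := template.New("dropin").Parse(dropIn)
	if err != nil {
		return "", err
	}
	var b strings.Builder
	if err := t.Execute(&b, o); err != nil {
		return "", err
	}
	return b.String(), nil
}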
	I1101 01:01:03.228517   59148 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 01:01:03.240926   59148 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 01:01:03.241014   59148 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 01:01:03.253440   59148 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I1101 01:01:03.271480   59148 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 01:01:03.292784   59148 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I1101 01:01:03.315295   59148 ssh_runner.go:195] Run: grep 192.168.72.97	control-plane.minikube.internal$ /etc/hosts
	I1101 01:01:03.319922   59148 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:01:03.332820   59148 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310 for IP: 192.168.72.97
	I1101 01:01:03.332869   59148 certs.go:190] acquiring lock for shared ca certs: {Name:mk6185b3938c4b51e80f57b7c81926c81b632b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:01:03.333015   59148 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key
	I1101 01:01:03.333067   59148 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key
	I1101 01:01:03.333174   59148 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/client.key
	I1101 01:01:03.333255   59148 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/apiserver.key.6d6df538
	I1101 01:01:03.333307   59148 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/proxy-client.key
	I1101 01:01:03.333469   59148 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem (1338 bytes)
	W1101 01:01:03.333531   59148 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504_empty.pem, impossibly tiny 0 bytes
	I1101 01:01:03.333542   59148 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 01:01:03.333580   59148 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem (1078 bytes)
	I1101 01:01:03.333632   59148 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem (1123 bytes)
	I1101 01:01:03.333699   59148 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem (1675 bytes)
	I1101 01:01:03.333761   59148 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:01:03.334633   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 01:01:03.361740   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 01:01:03.387535   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 01:01:03.414252   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/default-k8s-diff-port-639310/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 01:01:03.438492   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 01:01:03.463501   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 01:01:03.489800   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 01:01:03.517317   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 01:01:03.543330   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem --> /usr/share/ca-certificates/14504.pem (1338 bytes)
	I1101 01:01:03.567744   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /usr/share/ca-certificates/145042.pem (1708 bytes)
	I1101 01:01:03.594230   59148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 01:01:03.620857   59148 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 01:01:03.638676   59148 ssh_runner.go:195] Run: openssl version
	I1101 01:01:03.644139   59148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14504.pem && ln -fs /usr/share/ca-certificates/14504.pem /etc/ssl/certs/14504.pem"
	I1101 01:01:03.654667   59148 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14504.pem
	I1101 01:01:03.659261   59148 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 01:01:03.659322   59148 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14504.pem
	I1101 01:01:03.664592   59148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14504.pem /etc/ssl/certs/51391683.0"
	I1101 01:01:03.675482   59148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145042.pem && ln -fs /usr/share/ca-certificates/145042.pem /etc/ssl/certs/145042.pem"
	I1101 01:01:03.687903   59148 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145042.pem
	I1101 01:01:03.692901   59148 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 01:01:03.692970   59148 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145042.pem
	I1101 01:01:03.698691   59148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145042.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 01:01:03.709971   59148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 01:01:03.720612   59148 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:01:03.725306   59148 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:01:03.725397   59148 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:01:03.731004   59148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 01:01:03.743558   59148 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 01:01:03.748428   59148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 01:01:03.754404   59148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 01:01:03.760210   59148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 01:01:03.765964   59148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 01:01:03.771813   59148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 01:01:03.777659   59148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
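Each of the openssl x509 -checkend 86400 runs above asks whether a certificate will still be valid 24 hours from now. An equivalent check with crypto/x509 (a sketch; the helper name is illustrative):

// certcheck mirrors the "openssl x509 -checkend 86400" probes above:
// report whether a PEM-encoded certificate expires within the given window.
package certcheck

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"os"
	"time"
)

func ExpiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM data in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

For example, ExpiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour) corresponds to the -checkend 86400 probe on that file.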
	I1101 01:01:03.783754   59148 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-639310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-639310 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.97 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 01:01:03.783846   59148 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 01:01:03.783903   59148 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:01:03.823390   59148 cri.go:89] found id: ""
	I1101 01:01:03.823473   59148 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 01:01:03.835317   59148 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1101 01:01:03.835339   59148 kubeadm.go:636] restartCluster start
	I1101 01:01:03.835393   59148 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 01:01:03.845532   59148 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:03.846629   59148 kubeconfig.go:92] found "default-k8s-diff-port-639310" server: "https://192.168.72.97:8444"
	I1101 01:01:03.849176   59148 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 01:01:03.859318   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:03.859387   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:03.871598   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:03.871620   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:03.871682   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:03.882903   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:04.383593   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:04.383684   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:04.398424   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:04.883982   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:04.884095   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:04.901344   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:01.341708   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:01.342186   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:01.342216   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:01.342138   60142 retry.go:31] will retry after 1.416825478s: waiting for machine to come up
	I1101 01:01:02.760851   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:02.761364   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:02.761391   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:02.761319   60142 retry.go:31] will retry after 1.783291063s: waiting for machine to come up
	I1101 01:01:04.546179   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:04.546731   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:04.546768   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:04.546684   60142 retry.go:31] will retry after 1.94150512s: waiting for machine to come up
	I1101 01:01:04.706156   58823 api_server.go:269] stopped: https://192.168.39.90:8443/healthz: Get "https://192.168.39.90:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 01:01:04.706226   58823 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I1101 01:01:05.474195   58823 api_server.go:279] https://192.168.39.90:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 01:01:05.474233   58823 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 01:01:05.975031   58823 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I1101 01:01:05.981753   58823 api_server.go:279] https://192.168.39.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1101 01:01:05.981796   58823 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1101 01:01:06.474331   58823 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I1101 01:01:06.483910   58823 api_server.go:279] https://192.168.39.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1101 01:01:06.483971   58823 api_server.go:103] status: https://192.168.39.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1101 01:01:06.974478   58823 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I1101 01:01:06.983225   58823 api_server.go:279] https://192.168.39.90:8443/healthz returned 200:
	ok
	I1101 01:01:06.992078   58823 api_server.go:141] control plane version: v1.16.0
	I1101 01:01:06.992104   58823 api_server.go:131] duration metric: took 7.286307099s to wait for apiserver health ...
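The healthz polling above tolerates 403 (the anonymous probe is rejected until RBAC bootstrap finishes) and 500 (post-start hooks still failing) and only stops once it sees 200 with body "ok". A sketch of a single probe; TLS verification is skipped because the probe runs before the cluster CA is wired into a client, which is also why the anonymous 403 appears in the log:

// healthprobe is a sketch of the /healthz polling above: hit the endpoint,
// treat anything but 200 as "not ready yet", and let the caller retry.
package healthprobe

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func Healthy(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("%s returned %d:\n%s", url, resp.StatusCode, body)
	}
	return nil
}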
	I1101 01:01:06.992112   58823 cni.go:84] Creating CNI manager for ""
	I1101 01:01:06.992118   58823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:01:06.994180   58823 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:01:06.995961   58823 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:01:07.007478   58823 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1101 01:01:07.025029   58823 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:01:07.036645   58823 system_pods.go:59] 7 kube-system pods found
	I1101 01:01:07.036685   58823 system_pods.go:61] "coredns-5644d7b6d9-swhtm" [5c5eacff-9271-46c5-add0-a3931b67876b] Running
	I1101 01:01:07.036692   58823 system_pods.go:61] "etcd-old-k8s-version-330042" [0b703394-0d1c-419d-8e08-c2c299046293] Running
	I1101 01:01:07.036699   58823 system_pods.go:61] "kube-apiserver-old-k8s-version-330042" [0dcb0028-fa22-4107-afa1-fbdd14b615ab] Running
	I1101 01:01:07.036706   58823 system_pods.go:61] "kube-controller-manager-old-k8s-version-330042" [adc1372e-45e1-4365-a039-c06af715cb24] Running
	I1101 01:01:07.036712   58823 system_pods.go:61] "kube-proxy-h86m8" [6db2c8ff-26f9-4f22-9cbd-2405a81d9128] Running
	I1101 01:01:07.036718   58823 system_pods.go:61] "kube-scheduler-old-k8s-version-330042" [f3f78aa9-fcb1-4b87-b7fa-f86c44e801c0] Running
	I1101 01:01:07.036724   58823 system_pods.go:61] "storage-provisioner" [710e45b8-dab7-4bbc-9ce8-f607db4cb63e] Running
	I1101 01:01:07.036733   58823 system_pods.go:74] duration metric: took 11.681153ms to wait for pod list to return data ...
	I1101 01:01:07.036745   58823 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:01:07.043383   58823 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:01:07.043420   58823 node_conditions.go:123] node cpu capacity is 2
	I1101 01:01:07.043433   58823 node_conditions.go:105] duration metric: took 6.681589ms to run NodePressure ...
	I1101 01:01:07.043454   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:07.419893   58823 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1101 01:01:07.425342   58823 retry.go:31] will retry after 365.112122ms: kubelet not initialised
	I1101 01:01:03.491770   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:05.989935   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:05.383225   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:05.383308   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:05.399889   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:05.884036   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:05.884134   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:05.899867   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:06.383118   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:06.383241   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:06.399285   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:06.883379   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:06.883497   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:06.895160   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:07.383835   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:07.383951   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:07.401766   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:07.883254   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:07.883368   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:07.900024   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:08.383405   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:08.383494   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:08.401659   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:08.883099   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:08.883189   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:08.898348   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:09.383858   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:09.384003   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:09.396380   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:09.884003   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:09.884128   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:09.901031   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:06.489565   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:06.490176   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:06.490200   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:06.490117   60142 retry.go:31] will retry after 2.694877407s: waiting for machine to come up
	I1101 01:01:09.186086   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:09.186554   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:09.186584   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:09.186497   60142 retry.go:31] will retry after 2.651563817s: waiting for machine to come up
	I1101 01:01:07.799240   58823 retry.go:31] will retry after 519.025086ms: kubelet not initialised
	I1101 01:01:08.325024   58823 retry.go:31] will retry after 345.44325ms: kubelet not initialised
	I1101 01:01:08.674686   58823 retry.go:31] will retry after 665.113314ms: kubelet not initialised
	I1101 01:01:09.345867   58823 retry.go:31] will retry after 1.421023017s: kubelet not initialised
	I1101 01:01:10.773100   58823 retry.go:31] will retry after 1.15707988s: kubelet not initialised
	I1101 01:01:11.936215   58823 retry.go:31] will retry after 3.290674523s: kubelet not initialised
	I1101 01:01:08.490229   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:10.990967   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:12.991285   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:10.383739   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:10.383800   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:10.398972   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:10.882991   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:10.883089   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:10.897346   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:11.383976   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:11.384059   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:11.396332   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:11.883903   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:11.884020   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:11.897279   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:12.383675   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:12.383786   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:12.399623   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:12.883112   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:12.883191   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:12.895484   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:13.383069   59148 api_server.go:166] Checking apiserver status ...
	I1101 01:01:13.383181   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:13.395417   59148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:13.860229   59148 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1101 01:01:13.860262   59148 kubeadm.go:1128] stopping kube-system containers ...
	I1101 01:01:13.860277   59148 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 01:01:13.860360   59148 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:01:13.901712   59148 cri.go:89] found id: ""
	I1101 01:01:13.901809   59148 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 01:01:13.918956   59148 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:01:13.931401   59148 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:01:13.931477   59148 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:01:13.943486   59148 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 01:01:13.943512   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:14.077324   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
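The kubeadm init phase commands above run the versioned binary with /var/lib/minikube/binaries/<version> prepended to PATH and the generated kubeadm.yaml as --config. A small wrapper sketch (the helper itself is illustrative; the paths are the ones visible in the log):

// kubeadmphase sketches how the "kubeadm init phase ..." commands above are
// driven: bash -c with the versioned binaries directory on PATH and the
// rendered kubeadm.yaml as --config.
package kubeadmphase

import (
	"fmt"
	"os/exec"
)

func Run(version, phase string) error {
	script := fmt.Sprintf(
		`sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
		version, phase)
	if out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput(); err != nil {
		return fmt.Errorf("kubeadm phase %q failed: %v\n%s", phase, err, out)
	}
	return nil
}

Run("v1.28.3", "certs all") and Run("v1.28.3", "kubeconfig all") would correspond to the two invocations above.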
	I1101 01:01:11.839684   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:11.840140   58676 main.go:141] libmachine: (no-preload-008483) DBG | unable to find current IP address of domain no-preload-008483 in network mk-no-preload-008483
	I1101 01:01:11.840169   58676 main.go:141] libmachine: (no-preload-008483) DBG | I1101 01:01:11.840105   60142 retry.go:31] will retry after 4.157820096s: waiting for machine to come up
	I1101 01:01:15.233157   58823 retry.go:31] will retry after 3.531336164s: kubelet not initialised
	I1101 01:01:15.490358   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:17.491953   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:16.001208   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.001765   58676 main.go:141] libmachine: (no-preload-008483) Found IP for machine: 192.168.50.140
	I1101 01:01:16.001790   58676 main.go:141] libmachine: (no-preload-008483) Reserving static IP address...
	I1101 01:01:16.001806   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has current primary IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.002298   58676 main.go:141] libmachine: (no-preload-008483) Reserved static IP address: 192.168.50.140
	I1101 01:01:16.002338   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "no-preload-008483", mac: "52:54:00:6c:aa:b5", ip: "192.168.50.140"} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.002357   58676 main.go:141] libmachine: (no-preload-008483) Waiting for SSH to be available...
	I1101 01:01:16.002381   58676 main.go:141] libmachine: (no-preload-008483) DBG | skip adding static IP to network mk-no-preload-008483 - found existing host DHCP lease matching {name: "no-preload-008483", mac: "52:54:00:6c:aa:b5", ip: "192.168.50.140"}
	I1101 01:01:16.002397   58676 main.go:141] libmachine: (no-preload-008483) DBG | Getting to WaitForSSH function...
	I1101 01:01:16.004912   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.005349   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.005387   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.005528   58676 main.go:141] libmachine: (no-preload-008483) DBG | Using SSH client type: external
	I1101 01:01:16.005550   58676 main.go:141] libmachine: (no-preload-008483) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa (-rw-------)
	I1101 01:01:16.005589   58676 main.go:141] libmachine: (no-preload-008483) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.140 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 01:01:16.005607   58676 main.go:141] libmachine: (no-preload-008483) DBG | About to run SSH command:
	I1101 01:01:16.005621   58676 main.go:141] libmachine: (no-preload-008483) DBG | exit 0
	I1101 01:01:16.100131   58676 main.go:141] libmachine: (no-preload-008483) DBG | SSH cmd err, output: <nil>: 
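WaitForSSH shells out to the system ssh binary with host-key checking disabled and runs "exit 0" until the command succeeds. An os/exec approximation of that probe (a sketch, not the libmachine implementation; the option list is taken from the log above):

// sshprobe approximates the WaitForSSH step above: run the system ssh binary
// with a trivial "exit 0" command and report whether it succeeded.
package sshprobe

import (
	"fmt"
	"os/exec"
)

func Probe(user, ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, ip),
		"exit 0",
	}
	if out, err := exec.Command("ssh", args...).CombinedOutput(); err != nil {
		return fmt.Errorf("ssh not ready: %v\n%s", err, out)
	}
	return nil
}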
	I1101 01:01:16.100576   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetConfigRaw
	I1101 01:01:16.101304   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetIP
	I1101 01:01:16.104212   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.104482   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.104528   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.104710   58676 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/config.json ...
	I1101 01:01:16.104933   58676 machine.go:88] provisioning docker machine ...
	I1101 01:01:16.104951   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:01:16.105159   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetMachineName
	I1101 01:01:16.105351   58676 buildroot.go:166] provisioning hostname "no-preload-008483"
	I1101 01:01:16.105375   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetMachineName
	I1101 01:01:16.105551   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:16.107936   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.108287   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.108333   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.108422   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:16.108594   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.108734   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.108861   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:16.109041   58676 main.go:141] libmachine: Using SSH client type: native
	I1101 01:01:16.109531   58676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I1101 01:01:16.109557   58676 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-008483 && echo "no-preload-008483" | sudo tee /etc/hostname
	I1101 01:01:16.249893   58676 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-008483
	
	I1101 01:01:16.249924   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:16.253130   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.253531   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.253571   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.253879   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:16.254106   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.254304   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.254441   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:16.254608   58676 main.go:141] libmachine: Using SSH client type: native
	I1101 01:01:16.254965   58676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I1101 01:01:16.254987   58676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-008483' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-008483/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-008483' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 01:01:16.386797   58676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 01:01:16.386834   58676 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7305/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7305/.minikube}
	I1101 01:01:16.386862   58676 buildroot.go:174] setting up certificates
	I1101 01:01:16.386870   58676 provision.go:83] configureAuth start
	I1101 01:01:16.386879   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetMachineName
	I1101 01:01:16.387149   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetIP
	I1101 01:01:16.390409   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.390812   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.390844   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.391055   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:16.393580   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.394122   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.394154   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.394352   58676 provision.go:138] copyHostCerts
	I1101 01:01:16.394425   58676 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem, removing ...
	I1101 01:01:16.394438   58676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem
	I1101 01:01:16.394506   58676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/ca.pem (1078 bytes)
	I1101 01:01:16.394646   58676 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem, removing ...
	I1101 01:01:16.394658   58676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem
	I1101 01:01:16.394690   58676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/cert.pem (1123 bytes)
	I1101 01:01:16.394774   58676 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem, removing ...
	I1101 01:01:16.394786   58676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem
	I1101 01:01:16.394811   58676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7305/.minikube/key.pem (1675 bytes)
	I1101 01:01:16.394874   58676 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem org=jenkins.no-preload-008483 san=[192.168.50.140 192.168.50.140 localhost 127.0.0.1 minikube no-preload-008483]
	I1101 01:01:16.593958   58676 provision.go:172] copyRemoteCerts
	I1101 01:01:16.594024   58676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 01:01:16.594046   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:16.597073   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.597449   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.597484   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.597723   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:16.597956   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.598108   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:16.598247   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:01:16.689574   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 01:01:16.714820   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1101 01:01:16.744383   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 01:01:16.769305   58676 provision.go:86] duration metric: configureAuth took 382.416455ms
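
The configureAuth step logged above regenerates the machine's server certificate with the VM's IP and hostname as SANs, then copies ca.pem, server.pem and server-key.pem onto the guest. The snippet below is a minimal illustrative sketch (not minikube's provision code) of issuing such a server certificate from an existing CA with crypto/x509; the file names, organization and SAN values are copied from the log purely for illustration, and it assumes an RSA CA key in PKCS#1 PEM form.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func must(err error) {
    	if err != nil {
    		panic(err)
    	}
    }

    func main() {
    	// Load the CA certificate and key (paths and key format are assumptions).
    	caPEM, err := os.ReadFile("ca.pem")
    	must(err)
    	keyPEM, err := os.ReadFile("ca-key.pem")
    	must(err)
    	caBlock, _ := pem.Decode(caPEM)
    	caCert, err := x509.ParseCertificate(caBlock.Bytes)
    	must(err)
    	keyBlock, _ := pem.Decode(keyPEM)
    	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
    	must(err)

    	// Fresh server key plus a template carrying the SANs seen in the log above.
    	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	must(err)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-008483"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(1, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "no-preload-008483"},
    		IPAddresses:  []net.IP{net.ParseIP("192.168.50.140"), net.ParseIP("127.0.0.1")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
    	must(err)
    	must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
    }
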
	I1101 01:01:16.769338   58676 buildroot.go:189] setting minikube options for container-runtime
	I1101 01:01:16.769596   58676 config.go:182] Loaded profile config "no-preload-008483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:01:16.769692   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:16.773209   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.773565   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:16.773628   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:16.773828   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:16.774071   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.774353   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:16.774570   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:16.774772   58676 main.go:141] libmachine: Using SSH client type: native
	I1101 01:01:16.775132   58676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I1101 01:01:16.775150   58676 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 01:01:17.110397   58676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 01:01:17.110481   58676 machine.go:91] provisioned docker machine in 1.005532035s
	I1101 01:01:17.110500   58676 start.go:300] post-start starting for "no-preload-008483" (driver="kvm2")
	I1101 01:01:17.110513   58676 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 01:01:17.110559   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:01:17.110920   58676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 01:01:17.110948   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:17.114342   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.114794   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:17.114829   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.115028   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:17.115227   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:17.115440   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:17.115621   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:01:17.210514   58676 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 01:01:17.216393   58676 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 01:01:17.216423   58676 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/addons for local assets ...
	I1101 01:01:17.216522   58676 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7305/.minikube/files for local assets ...
	I1101 01:01:17.216640   58676 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem -> 145042.pem in /etc/ssl/certs
	I1101 01:01:17.216773   58676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 01:01:17.229604   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:01:17.255095   58676 start.go:303] post-start completed in 144.577436ms
	I1101 01:01:17.255120   58676 fix.go:56] fixHost completed within 21.173509578s
	I1101 01:01:17.255192   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:17.258433   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.258833   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:17.258858   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.259085   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:17.259305   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:17.259478   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:17.259628   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:17.259825   58676 main.go:141] libmachine: Using SSH client type: native
	I1101 01:01:17.260306   58676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I1101 01:01:17.260321   58676 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1101 01:01:17.389718   58676 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698800477.337229135
	
	I1101 01:01:17.389748   58676 fix.go:206] guest clock: 1698800477.337229135
	I1101 01:01:17.389770   58676 fix.go:219] Guest: 2023-11-01 01:01:17.337229135 +0000 UTC Remote: 2023-11-01 01:01:17.255124581 +0000 UTC m=+361.362725964 (delta=82.104554ms)
	I1101 01:01:17.389797   58676 fix.go:190] guest clock delta is within tolerance: 82.104554ms
	I1101 01:01:17.389804   58676 start.go:83] releasing machines lock for "no-preload-008483", held for 21.308227601s
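
The fix.go lines above read the guest clock over SSH (date +%s.%N), compare it against the host clock and accept the result only when the skew stays within a tolerance. A minimal sketch of that comparison follows; the one-second tolerance is an assumption for illustration, not the value minikube actually uses.

    package main

    import (
    	"fmt"
    	"time"
    )

    // withinTolerance reports the absolute guest/host skew and whether it is acceptable.
    func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tol
    }

    func main() {
    	guest := time.Unix(1698800477, 337229135) // value parsed from `date +%s.%N` on the VM
    	host := time.Now()
    	delta, ok := withinTolerance(guest, host, time.Second) // tolerance is illustrative
    	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
    }
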
	I1101 01:01:17.389828   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:01:17.390149   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetIP
	I1101 01:01:17.393289   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.393692   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:17.393723   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.393937   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:01:17.394589   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:01:17.394780   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:01:17.394877   58676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 01:01:17.394918   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:17.395060   58676 ssh_runner.go:195] Run: cat /version.json
	I1101 01:01:17.395115   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:01:17.398497   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:17.398497   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.398581   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:17.398642   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.398665   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.398700   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:17.398853   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:17.398861   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:17.398881   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:17.398995   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:01:17.399475   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:01:17.399644   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:01:17.399798   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:01:17.399976   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:01:17.524462   58676 ssh_runner.go:195] Run: systemctl --version
	I1101 01:01:17.530328   58676 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 01:01:17.678956   58676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 01:01:17.686754   58676 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 01:01:17.686834   58676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 01:01:17.705358   58676 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 01:01:17.705388   58676 start.go:472] detecting cgroup driver to use...
	I1101 01:01:17.705527   58676 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 01:01:17.722410   58676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 01:01:17.739380   58676 docker.go:204] disabling cri-docker service (if available) ...
	I1101 01:01:17.739443   58676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 01:01:17.755953   58676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 01:01:17.769672   58676 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 01:01:17.900801   58676 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 01:01:18.027283   58676 docker.go:220] disabling docker service ...
	I1101 01:01:18.027378   58676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 01:01:18.041230   58676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 01:01:18.052784   58676 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 01:01:18.165341   58676 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 01:01:18.276403   58676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 01:01:18.289618   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 01:01:18.308480   58676 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 01:01:18.308562   58676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:01:18.318597   58676 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 01:01:18.318673   58676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:01:18.328312   58676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:01:18.340054   58676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:01:18.351854   58676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 01:01:18.364129   58676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 01:01:18.372789   58676 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 01:01:18.372879   58676 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 01:01:18.385792   58676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 01:01:18.394803   58676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 01:01:18.503941   58676 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 01:01:18.687034   58676 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 01:01:18.687137   58676 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 01:01:18.691750   58676 start.go:540] Will wait 60s for crictl version
	I1101 01:01:18.691818   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:18.695752   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 01:01:18.735012   58676 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1101 01:01:18.735098   58676 ssh_runner.go:195] Run: crio --version
	I1101 01:01:18.782835   58676 ssh_runner.go:195] Run: crio --version
	I1101 01:01:18.829727   58676 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
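
After rewriting the CRI-O drop-in config (pause image, cgroupfs, conmon cgroup) and restarting the service, the log shows a bounded wait: up to 60s for /var/run/crio/crio.sock to exist and another 60s for crictl to answer. The sketch below illustrates that kind of stat-until-deadline loop; the 500ms polling interval is an assumption, not minikube's actual interval.

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForPath polls until path exists or the deadline passes.
    func waitForPath(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s", path)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("socket is ready")
    }
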
	I1101 01:01:15.054547   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:15.248625   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:15.325492   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:15.396782   59148 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:01:15.396854   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:15.420220   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:15.941271   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:16.441997   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:16.942240   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:17.441850   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:17.941784   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:17.965191   59148 api_server.go:72] duration metric: took 2.5684081s to wait for apiserver process to appear ...
	I1101 01:01:17.965220   59148 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:01:17.965238   59148 api_server.go:253] Checking apiserver healthz at https://192.168.72.97:8444/healthz ...
	I1101 01:01:18.831303   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetIP
	I1101 01:01:18.834574   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:18.834969   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:01:18.835003   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:01:18.835233   58676 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1101 01:01:18.839259   58676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:01:18.853665   58676 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 01:01:18.853725   58676 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:01:18.890995   58676 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1101 01:01:18.891024   58676 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.3 registry.k8s.io/kube-controller-manager:v1.28.3 registry.k8s.io/kube-scheduler:v1.28.3 registry.k8s.io/kube-proxy:v1.28.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1101 01:01:18.891130   58676 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1101 01:01:18.891143   58676 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.3
	I1101 01:01:18.891144   58676 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1101 01:01:18.891201   58676 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1101 01:01:18.891263   58676 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.3
	I1101 01:01:18.891397   58676 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.3
	I1101 01:01:18.891415   58676 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1101 01:01:18.891134   58676 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:01:18.892729   58676 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.3
	I1101 01:01:18.892742   58676 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:01:18.892747   58676 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1101 01:01:18.892760   58676 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1101 01:01:18.892760   58676 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1101 01:01:18.892729   58676 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.3
	I1101 01:01:18.892790   58676 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.3
	I1101 01:01:18.892835   58676 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1101 01:01:19.112836   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1101 01:01:19.131170   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.3
	I1101 01:01:19.147328   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.3
	I1101 01:01:19.148513   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I1101 01:01:19.155909   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.3
	I1101 01:01:19.163598   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.3
	I1101 01:01:19.166436   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I1101 01:01:19.290823   58676 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.3" needs transfer: "registry.k8s.io/kube-proxy:v1.28.3" does not exist at hash "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf" in container runtime
	I1101 01:01:19.290888   58676 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.3
	I1101 01:01:19.290943   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:19.331622   58676 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.3" does not exist at hash "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076" in container runtime
	I1101 01:01:19.331709   58676 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.3" does not exist at hash "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4" in container runtime
	I1101 01:01:19.331776   58676 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.3
	I1101 01:01:19.331717   58676 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.3
	I1101 01:01:19.331872   58676 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.3" does not exist at hash "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3" in container runtime
	I1101 01:01:19.331899   58676 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1101 01:01:19.331905   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:19.331645   58676 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1101 01:01:19.331979   58676 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1101 01:01:19.331986   58676 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1101 01:01:19.332011   58676 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1101 01:01:19.332023   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:19.331945   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:19.332053   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:19.332040   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.3
	I1101 01:01:19.331842   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:19.342099   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.3
	I1101 01:01:19.396521   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I1101 01:01:19.396603   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.3
	I1101 01:01:19.396612   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3
	I1101 01:01:19.396628   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.3
	I1101 01:01:19.396681   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1101 01:01:19.396700   58676 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.3
	I1101 01:01:19.396750   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3
	I1101 01:01:19.396842   58676 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1101 01:01:19.497732   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.3 (exists)
	I1101 01:01:19.497756   58676 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.3
	I1101 01:01:19.497784   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1101 01:01:19.497813   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3
	I1101 01:01:19.497871   58676 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0
	I1101 01:01:19.497924   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3
	I1101 01:01:19.497964   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.3 (exists)
	I1101 01:01:19.498009   58676 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1101 01:01:19.498015   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3
	I1101 01:01:19.498054   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1101 01:01:19.498111   58676 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I1101 01:01:19.498117   58676 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1101 01:01:19.764214   58676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:01:18.769797   58823 retry.go:31] will retry after 5.956460089s: kubelet not initialised
	I1101 01:01:19.987384   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:21.989585   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:22.277798   59148 api_server.go:279] https://192.168.72.97:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 01:01:22.277829   59148 api_server.go:103] status: https://192.168.72.97:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 01:01:22.277839   59148 api_server.go:253] Checking apiserver healthz at https://192.168.72.97:8444/healthz ...
	I1101 01:01:22.371756   59148 api_server.go:279] https://192.168.72.97:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 01:01:22.371796   59148 api_server.go:103] status: https://192.168.72.97:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 01:01:22.872332   59148 api_server.go:253] Checking apiserver healthz at https://192.168.72.97:8444/healthz ...
	I1101 01:01:22.884543   59148 api_server.go:279] https://192.168.72.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:01:22.884587   59148 api_server.go:103] status: https://192.168.72.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:01:23.372033   59148 api_server.go:253] Checking apiserver healthz at https://192.168.72.97:8444/healthz ...
	I1101 01:01:23.381608   59148 api_server.go:279] https://192.168.72.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:01:23.381657   59148 api_server.go:103] status: https://192.168.72.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:01:23.872319   59148 api_server.go:253] Checking apiserver healthz at https://192.168.72.97:8444/healthz ...
	I1101 01:01:23.879515   59148 api_server.go:279] https://192.168.72.97:8444/healthz returned 200:
	ok
	I1101 01:01:23.892376   59148 api_server.go:141] control plane version: v1.28.3
	I1101 01:01:23.892412   59148 api_server.go:131] duration metric: took 5.927178892s to wait for apiserver health ...
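
The healthz exchange above shows the restarted apiserver being polled at https://192.168.72.97:8444/healthz: first 403 (anonymous user forbidden), then 500 while the rbac/bootstrap-roles and scheduling poststarthooks are still pending, and finally 200 once every check reports ok. The following sketch illustrates such a poll loop against a self-signed endpoint; certificate verification is skipped purely for the probe, and the retry interval and overall timeout are assumptions.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// The apiserver presents a cluster-internal certificate; skip verification for the probe.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver at %s never became healthy", url)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.72.97:8444/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
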
	I1101 01:01:23.892424   59148 cni.go:84] Creating CNI manager for ""
	I1101 01:01:23.892433   59148 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:01:23.894577   59148 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:01:23.896163   59148 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:01:23.928482   59148 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1101 01:01:23.952485   59148 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:01:23.968054   59148 system_pods.go:59] 8 kube-system pods found
	I1101 01:01:23.968095   59148 system_pods.go:61] "coredns-5dd5756b68-lmxx8" [c74c5ddc-56a8-422c-a140-1fdd14ef817d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 01:01:23.968115   59148 system_pods.go:61] "etcd-default-k8s-diff-port-639310" [1baf2571-f6c6-43bc-8051-e72f7eb4ed70] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 01:01:23.968126   59148 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-639310" [9cbc66c6-7c66-4b24-9400-a5add2edec14] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 01:01:23.968145   59148 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-639310" [99945be6-6fb8-4da6-8c6a-c25a2023d2d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 01:01:23.968158   59148 system_pods.go:61] "kube-proxy-f45wg" [abe74c94-5140-4c35-a141-d995652948e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 01:01:23.968167   59148 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-639310" [299c1962-1945-4525-90c7-384d515dc4e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 01:01:23.968188   59148 system_pods.go:61] "metrics-server-57f55c9bc5-6szl7" [1e00ef03-d5f4-4e8b-a247-8c31a5492f9e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:01:23.968201   59148 system_pods.go:61] "storage-provisioner" [fe2e7631-0564-44d2-afbd-578fb37f6a04] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 01:01:23.968215   59148 system_pods.go:74] duration metric: took 15.694719ms to wait for pod list to return data ...
	I1101 01:01:23.968224   59148 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:01:23.972141   59148 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:01:23.972177   59148 node_conditions.go:123] node cpu capacity is 2
	I1101 01:01:23.972191   59148 node_conditions.go:105] duration metric: took 3.96106ms to run NodePressure ...
	I1101 01:01:23.972214   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:24.253558   59148 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1101 01:01:24.258842   59148 kubeadm.go:787] kubelet initialised
	I1101 01:01:24.258869   59148 kubeadm.go:788] duration metric: took 5.283339ms waiting for restarted kubelet to initialise ...
	I1101 01:01:24.258878   59148 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:01:24.265507   59148 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-lmxx8" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:24.271381   59148 pod_ready.go:97] node "default-k8s-diff-port-639310" hosting pod "coredns-5dd5756b68-lmxx8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.271408   59148 pod_ready.go:81] duration metric: took 5.876802ms waiting for pod "coredns-5dd5756b68-lmxx8" in "kube-system" namespace to be "Ready" ...
	E1101 01:01:24.271418   59148 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-639310" hosting pod "coredns-5dd5756b68-lmxx8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.271426   59148 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:24.277446   59148 pod_ready.go:97] node "default-k8s-diff-port-639310" hosting pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.277476   59148 pod_ready.go:81] duration metric: took 6.04229ms waiting for pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	E1101 01:01:24.277487   59148 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-639310" hosting pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.277495   59148 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:24.283557   59148 pod_ready.go:97] node "default-k8s-diff-port-639310" hosting pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.283604   59148 pod_ready.go:81] duration metric: took 6.094277ms waiting for pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	E1101 01:01:24.283617   59148 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-639310" hosting pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.283630   59148 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:24.357249   59148 pod_ready.go:97] node "default-k8s-diff-port-639310" hosting pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.357288   59148 pod_ready.go:81] duration metric: took 73.64295ms waiting for pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	E1101 01:01:24.357302   59148 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-639310" hosting pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-639310" has status "Ready":"False"
	I1101 01:01:24.357319   59148 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f45wg" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:21.457919   58676 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0: (1.960002941s)
	I1101 01:01:21.457955   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I1101 01:01:21.458111   58676 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.3: (1.960074529s)
	I1101 01:01:21.458140   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.3 (exists)
	I1101 01:01:21.458152   58676 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.3: (1.960014372s)
	I1101 01:01:21.458176   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.3 (exists)
	I1101 01:01:21.458226   58676 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1: (1.960094366s)
	I1101 01:01:21.458252   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I1101 01:01:21.458267   58676 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.694021872s)
	I1101 01:01:21.458306   58676 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1101 01:01:21.458344   58676 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:01:21.458392   58676 ssh_runner.go:195] Run: which crictl
	I1101 01:01:21.458644   58676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3: (1.960815533s)
	I1101 01:01:21.458659   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3 from cache
	I1101 01:01:21.458686   58676 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1101 01:01:21.458718   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1101 01:01:21.462463   58676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:01:23.757842   58676 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.295346464s)
	I1101 01:01:23.757911   58676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1101 01:01:23.757849   58676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3: (2.299099605s)
	I1101 01:01:23.757965   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3 from cache
	I1101 01:01:23.758006   58676 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I1101 01:01:23.758025   58676 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1101 01:01:23.758040   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I1101 01:01:23.764726   58676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1101 01:01:24.732471   58823 retry.go:31] will retry after 9.584941607s: kubelet not initialised
	I1101 01:01:23.990727   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:26.489463   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:25.156181   59148 pod_ready.go:92] pod "kube-proxy-f45wg" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:25.156211   59148 pod_ready.go:81] duration metric: took 798.883976ms waiting for pod "kube-proxy-f45wg" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:25.156225   59148 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:27.476794   59148 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:29.974327   59148 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:29.974364   59148 pod_ready.go:81] duration metric: took 4.818128166s waiting for pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:29.974381   59148 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace to be "Ready" ...
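
The pod_ready lines keep fetching each system-critical pod and testing its Ready condition; metrics-server is the one that stays "Ready":"False" throughout, which is what these tests eventually time out on. Below is a minimal client-go sketch of that readiness check, not minikube's pod_ready implementation; the kubeconfig path is a placeholder.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady returns true when the pod's Ready condition is True.
    func podIsReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ready, err := podIsReady(context.Background(), cs, "kube-system", "metrics-server-57f55c9bc5-6szl7")
    	fmt.Println(ready, err)
    }
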
	I1101 01:01:28.990433   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:30.991378   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:32.004594   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:34.006695   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:31.399348   58676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.641283444s)
	I1101 01:01:31.399378   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I1101 01:01:31.399412   58676 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1101 01:01:31.399465   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1101 01:01:33.857323   58676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3: (2.45781579s)
	I1101 01:01:33.857356   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3 from cache
	I1101 01:01:33.857384   58676 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1101 01:01:33.857444   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1101 01:01:34.322788   58823 retry.go:31] will retry after 7.673111332s: kubelet not initialised
	I1101 01:01:33.488934   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:35.489417   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:37.989455   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:36.506432   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:39.004133   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:36.328716   58676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3: (2.471243195s)
	I1101 01:01:36.328755   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3 from cache
	I1101 01:01:36.328788   58676 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I1101 01:01:36.328839   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I1101 01:01:37.691820   58676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.362944664s)
	I1101 01:01:37.691851   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I1101 01:01:37.691877   58676 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1101 01:01:37.691978   58676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1101 01:01:38.442125   58676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17486-7305/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1101 01:01:38.442181   58676 cache_images.go:123] Successfully loaded all cached images
	I1101 01:01:38.442188   58676 cache_images.go:92] LoadImages completed in 19.55115042s
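For context, the image-load step that finishes here is driven over SSH inside the guest; a minimal shell sketch of the equivalent manual sequence (tarball path taken from the log above, the grep filter and the crictl check are only illustrative) might look like:

    # Load one cached image tarball into the shared container storage, then
    # confirm the CRI runtime can see it.
    sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
    sudo crictl images | grep etcd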
	I1101 01:01:38.442260   58676 ssh_runner.go:195] Run: crio config
	I1101 01:01:38.499778   58676 cni.go:84] Creating CNI manager for ""
	I1101 01:01:38.499799   58676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:01:38.499820   58676 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 01:01:38.499846   58676 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.140 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-008483 NodeName:no-preload-008483 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 01:01:38.500007   58676 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.140
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-008483"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.140
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.140"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 01:01:38.500076   58676 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-008483 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:no-preload-008483 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
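The 10-kubeadm.conf drop-in shown above relies on standard systemd override behaviour: the empty "ExecStart=" line clears the ExecStart inherited from kubelet.service before the minikube-specific command is set. A hedged sketch of how that can be verified on the node (command is illustrative, not taken from this run):

    # Show the kubelet unit together with all drop-ins; the drop-in's empty
    # "ExecStart=" resets the base unit's command before the new ExecStart
    # line takes effect.
    systemctl cat kubelet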
	I1101 01:01:38.500135   58676 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 01:01:38.510073   58676 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 01:01:38.510160   58676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 01:01:38.517853   58676 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1101 01:01:38.534085   58676 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 01:01:38.549312   58676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I1101 01:01:38.566320   58676 ssh_runner.go:195] Run: grep 192.168.50.140	control-plane.minikube.internal$ /etc/hosts
	I1101 01:01:38.569923   58676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.140	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
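The one-liner above rewrites /etc/hosts through a temp file so the control-plane alias stays unique; an expanded, commented form of the same logic (same host entry as logged, written out only for readability):

    # Drop any stale control-plane.minikube.internal entry, append the current
    # mapping, then copy the rebuilt file back over /etc/hosts.
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '192.168.50.140\tcontrol-plane.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts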
	I1101 01:01:38.582147   58676 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483 for IP: 192.168.50.140
	I1101 01:01:38.582180   58676 certs.go:190] acquiring lock for shared ca certs: {Name:mk6185b3938c4b51e80f57b7c81926c81b632b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:01:38.582353   58676 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key
	I1101 01:01:38.582412   58676 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key
	I1101 01:01:38.582512   58676 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/client.key
	I1101 01:01:38.582596   58676 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/apiserver.key.306fa7af
	I1101 01:01:38.582664   58676 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/proxy-client.key
	I1101 01:01:38.582841   58676 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem (1338 bytes)
	W1101 01:01:38.582887   58676 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504_empty.pem, impossibly tiny 0 bytes
	I1101 01:01:38.582903   58676 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 01:01:38.582941   58676 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/ca.pem (1078 bytes)
	I1101 01:01:38.582978   58676 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/cert.pem (1123 bytes)
	I1101 01:01:38.583015   58676 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/certs/home/jenkins/minikube-integration/17486-7305/.minikube/certs/key.pem (1675 bytes)
	I1101 01:01:38.583082   58676 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem (1708 bytes)
	I1101 01:01:38.583827   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 01:01:38.607306   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 01:01:38.631666   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 01:01:38.655201   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/no-preload-008483/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 01:01:38.678237   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 01:01:38.700410   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 01:01:38.726807   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 01:01:38.752672   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 01:01:38.776285   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 01:01:38.799902   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/certs/14504.pem --> /usr/share/ca-certificates/14504.pem (1338 bytes)
	I1101 01:01:38.823790   58676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/ssl/certs/145042.pem --> /usr/share/ca-certificates/145042.pem (1708 bytes)
	I1101 01:01:38.847407   58676 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 01:01:38.863594   58676 ssh_runner.go:195] Run: openssl version
	I1101 01:01:38.869214   58676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14504.pem && ln -fs /usr/share/ca-certificates/14504.pem /etc/ssl/certs/14504.pem"
	I1101 01:01:38.878725   58676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14504.pem
	I1101 01:01:38.883007   58676 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:54 /usr/share/ca-certificates/14504.pem
	I1101 01:01:38.883069   58676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14504.pem
	I1101 01:01:38.888251   58676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14504.pem /etc/ssl/certs/51391683.0"
	I1101 01:01:38.899894   58676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145042.pem && ln -fs /usr/share/ca-certificates/145042.pem /etc/ssl/certs/145042.pem"
	I1101 01:01:38.909658   58676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145042.pem
	I1101 01:01:38.914011   58676 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:54 /usr/share/ca-certificates/145042.pem
	I1101 01:01:38.914088   58676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145042.pem
	I1101 01:01:38.919323   58676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145042.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 01:01:38.928836   58676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 01:01:38.937988   58676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:01:38.943540   58676 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:01:38.943607   58676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:01:38.949543   58676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
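The hash-and-symlink steps above follow OpenSSL's CA lookup convention: a trusted certificate is located under /etc/ssl/certs/<subject-hash>.0. A hedged sketch of the same linking done by hand (file names as logged above):

    # OpenSSL finds trusted CAs by subject-name hash, so each PEM gets a
    # <hash>.0 symlink; this reproduces the link for minikubeCA.pem manually.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"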
	I1101 01:01:38.959098   58676 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 01:01:38.963149   58676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 01:01:38.968868   58676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 01:01:38.974315   58676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 01:01:38.979746   58676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 01:01:38.985852   58676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 01:01:38.991864   58676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
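The -checkend 86400 probes above ask openssl whether each certificate expires within the next 24 hours (86400 seconds); the command exits non-zero if it does, which is what makes minikube regenerate a cert instead of reusing it. A minimal hedged example for one of the files checked:

    # Exit status 0 means the cert is still valid for at least the next 24h.
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400 \
      && echo "valid for >= 24h" || echo "expires within 24h"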
	I1101 01:01:38.998153   58676 kubeadm.go:404] StartCluster: {Name:no-preload-008483 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-008483 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.140 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 01:01:38.998271   58676 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 01:01:38.998340   58676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:01:39.045797   58676 cri.go:89] found id: ""
	I1101 01:01:39.045870   58676 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 01:01:39.056166   58676 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1101 01:01:39.056197   58676 kubeadm.go:636] restartCluster start
	I1101 01:01:39.056252   58676 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 01:01:39.065191   58676 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:39.066337   58676 kubeconfig.go:92] found "no-preload-008483" server: "https://192.168.50.140:8443"
	I1101 01:01:39.068843   58676 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 01:01:39.077558   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:39.077606   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:39.088105   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:39.088123   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:39.088168   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:39.100203   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:39.600957   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:39.601029   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:39.612652   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:40.101101   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:40.101191   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:40.113249   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:40.600487   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:40.600552   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:40.612183   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:42.002176   58823 kubeadm.go:787] kubelet initialised
	I1101 01:01:42.002198   58823 kubeadm.go:788] duration metric: took 34.582278796s waiting for restarted kubelet to initialise ...
	I1101 01:01:42.002211   58823 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:01:42.007691   58823 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-m8mn8" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.012657   58823 pod_ready.go:92] pod "coredns-5644d7b6d9-m8mn8" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:42.012677   58823 pod_ready.go:81] duration metric: took 4.961011ms waiting for pod "coredns-5644d7b6d9-m8mn8" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.012687   58823 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-swhtm" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.017099   58823 pod_ready.go:92] pod "coredns-5644d7b6d9-swhtm" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:42.017124   58823 pod_ready.go:81] duration metric: took 4.429709ms waiting for pod "coredns-5644d7b6d9-swhtm" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.017137   58823 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.021376   58823 pod_ready.go:92] pod "etcd-old-k8s-version-330042" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:42.021403   58823 pod_ready.go:81] duration metric: took 4.25772ms waiting for pod "etcd-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.021415   58823 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.026057   58823 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-330042" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:42.026080   58823 pod_ready.go:81] duration metric: took 4.65685ms waiting for pod "kube-apiserver-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.026096   58823 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.401057   58823 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-330042" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:42.401085   58823 pod_ready.go:81] duration metric: took 374.980275ms waiting for pod "kube-controller-manager-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.401099   58823 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-h86m8" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:40.487876   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:42.488609   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:41.504485   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:44.005180   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:41.100662   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:41.100773   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:41.113339   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:41.601121   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:41.601195   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:41.613986   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:42.101110   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:42.101188   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:42.113963   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:42.600356   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:42.600458   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:42.612154   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:43.100679   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:43.100767   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:43.113009   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:43.601328   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:43.601402   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:43.612862   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:44.101146   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:44.101261   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:44.113407   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:44.600812   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:44.600955   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:44.613161   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:45.100665   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:45.100769   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:45.112905   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:45.600416   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:45.600515   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:45.612930   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:42.801878   58823 pod_ready.go:92] pod "kube-proxy-h86m8" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:42.801899   58823 pod_ready.go:81] duration metric: took 400.793617ms waiting for pod "kube-proxy-h86m8" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:42.801907   58823 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:43.201586   58823 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-330042" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:43.201618   58823 pod_ready.go:81] duration metric: took 399.702904ms waiting for pod "kube-scheduler-old-k8s-version-330042" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:43.201632   58823 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:45.508037   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:44.489092   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:46.493162   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:46.506251   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:49.004539   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:46.100957   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:46.101023   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:46.113645   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:46.600681   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:46.600781   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:46.612564   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:47.101090   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:47.101156   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:47.113500   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:47.601105   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:47.601244   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:47.613091   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:48.100608   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:48.100725   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:48.112995   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:48.600520   58676 api_server.go:166] Checking apiserver status ...
	I1101 01:01:48.600603   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 01:01:48.612240   58676 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 01:01:49.077973   58676 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1101 01:01:49.078017   58676 kubeadm.go:1128] stopping kube-system containers ...
	I1101 01:01:49.078031   58676 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 01:01:49.078097   58676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:01:49.117615   58676 cri.go:89] found id: ""
	I1101 01:01:49.117689   58676 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 01:01:49.133583   58676 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:01:49.142851   58676 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:01:49.142922   58676 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:01:49.151952   58676 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 01:01:49.151973   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:49.270827   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:50.046638   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:50.252510   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:50.327660   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
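The restart path above re-runs individual kubeadm init phases instead of a full init; stripped of the env PATH wrapper, the sequence logged above is roughly:

    # Re-generate only what a restarted control plane needs, phase by phase,
    # against the already-rendered /var/tmp/minikube/kubeadm.yaml.
    kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml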
	I1101 01:01:50.398419   58676 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:01:50.398511   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:50.415262   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:50.931672   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:47.508466   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:49.509032   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:51.510816   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:48.987561   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:50.989519   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:52.989978   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:51.004704   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:53.006138   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:51.431168   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:51.931127   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:52.431292   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:01:52.462617   58676 api_server.go:72] duration metric: took 2.064198698s to wait for apiserver process to appear ...
	I1101 01:01:52.462644   58676 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:01:52.462658   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:52.463297   58676 api_server.go:269] stopped: https://192.168.50.140:8443/healthz: Get "https://192.168.50.140:8443/healthz": dial tcp 192.168.50.140:8443: connect: connection refused
	I1101 01:01:52.463360   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:52.463831   58676 api_server.go:269] stopped: https://192.168.50.140:8443/healthz: Get "https://192.168.50.140:8443/healthz": dial tcp 192.168.50.140:8443: connect: connection refused
	I1101 01:01:52.964290   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:54.007720   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:56.012280   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:56.353340   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 01:01:56.353399   58676 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 01:01:56.353416   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:56.404133   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:01:56.404176   58676 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:01:56.464272   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:56.470496   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:01:56.470553   58676 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:01:56.964058   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:56.975831   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:01:56.975877   58676 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:01:57.464038   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:57.472652   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 01:01:57.472697   58676 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 01:01:57.964020   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:01:57.970866   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 200:
	ok
	I1101 01:01:57.979612   58676 api_server.go:141] control plane version: v1.28.3
	I1101 01:01:57.979641   58676 api_server.go:131] duration metric: took 5.516990946s to wait for apiserver health ...
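The anonymous 403 followed by the 500s and the final 200 above is simply the apiserver's post-start hooks coming up one by one. Once the admin kubeconfig exists, the same endpoint can be queried with a per-hook breakdown; a hedged sketch run inside the guest (the kubectl path is inferred from the kubelet binaries path logged earlier, not taken from this run):

    # Query the aggregate health endpoint with the admin kubeconfig that
    # "kubeadm init phase kubeconfig all" regenerated above; ?verbose lists
    # each post-start hook, matching the [+]/[-] lines in this log.
    sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig /etc/kubernetes/admin.conf \
      get --raw '/healthz?verbose'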
	I1101 01:01:57.979650   58676 cni.go:84] Creating CNI manager for ""
	I1101 01:01:57.979657   58676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:01:57.981694   58676 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:01:54.990377   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:57.489817   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:55.505767   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:57.505977   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:00.004800   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:57.983198   58676 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:01:58.006916   58676 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
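The 457-byte file copied above is the bridge CNI config minikube generates for the crio runtime. A hedged way to confirm it landed and parses cleanly (run inside the guest, e.g. via minikube ssh -p no-preload-008483; jq availability in the guest image is an assumption):

    # Confirm the generated conflist landed in the standard CNI config
    # directory read by CRI-O, and that it is valid JSON.
    ls /etc/cni/net.d/
    sudo jq . /etc/cni/net.d/1-k8s.conflist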
	I1101 01:01:58.035969   58676 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:01:58.047783   58676 system_pods.go:59] 8 kube-system pods found
	I1101 01:01:58.047833   58676 system_pods.go:61] "coredns-5dd5756b68-kcjf2" [e5cba8fe-f5c0-48cd-a21b-649caf4405cd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 01:01:58.047848   58676 system_pods.go:61] "etcd-no-preload-008483" [6e8ce64d-5c27-4528-9ecb-4bd1c2ab55c9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 01:01:58.047868   58676 system_pods.go:61] "kube-apiserver-no-preload-008483" [c320b03e-f364-4b38-8f09-5239d66f90e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 01:01:58.047881   58676 system_pods.go:61] "kube-controller-manager-no-preload-008483" [b89beee3-61e6-4efa-926f-43ae6a50e44b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 01:01:58.047893   58676 system_pods.go:61] "kube-proxy-xjfsj" [a7195683-b9ee-440c-82e6-efcd325a35e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 01:01:58.047907   58676 system_pods.go:61] "kube-scheduler-no-preload-008483" [d8c6a1f5-ceca-46af-9a40-22053f5387b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 01:01:58.047920   58676 system_pods.go:61] "metrics-server-57f55c9bc5-49wtw" [b87d5491-9981-48d5-9cf8-34dbd4b24435] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:01:58.047946   58676 system_pods.go:61] "storage-provisioner" [bf9d5910-ae5f-48f9-9358-54b2068c2e2c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 01:01:58.047959   58676 system_pods.go:74] duration metric: took 11.96541ms to wait for pod list to return data ...
	I1101 01:01:58.047971   58676 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:01:58.052170   58676 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:01:58.052205   58676 node_conditions.go:123] node cpu capacity is 2
	I1101 01:01:58.052218   58676 node_conditions.go:105] duration metric: took 4.239786ms to run NodePressure ...
	I1101 01:01:58.052237   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 01:01:58.340580   58676 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1101 01:01:58.351480   58676 kubeadm.go:787] kubelet initialised
	I1101 01:01:58.351510   58676 kubeadm.go:788] duration metric: took 10.903426ms waiting for restarted kubelet to initialise ...
	I1101 01:01:58.351520   58676 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:01:58.359099   58676 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-kcjf2" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:00.383123   58676 pod_ready.go:102] pod "coredns-5dd5756b68-kcjf2" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:58.509858   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:01.009429   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:01:59.988392   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:01.989042   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:02.505009   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:05.004485   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:02.880623   58676 pod_ready.go:102] pod "coredns-5dd5756b68-kcjf2" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:04.878534   58676 pod_ready.go:92] pod "coredns-5dd5756b68-kcjf2" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:04.878556   58676 pod_ready.go:81] duration metric: took 6.519426334s waiting for pod "coredns-5dd5756b68-kcjf2" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:04.878565   58676 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:03.508377   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:05.508570   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:03.990099   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:06.488196   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:07.005182   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:09.505205   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:06.907992   58676 pod_ready.go:102] pod "etcd-no-preload-008483" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:09.400005   58676 pod_ready.go:102] pod "etcd-no-preload-008483" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:09.900354   58676 pod_ready.go:92] pod "etcd-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:09.900379   58676 pod_ready.go:81] duration metric: took 5.021808339s waiting for pod "etcd-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.900394   58676 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.906496   58676 pod_ready.go:92] pod "kube-apiserver-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:09.906520   58676 pod_ready.go:81] duration metric: took 6.117499ms waiting for pod "kube-apiserver-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.906532   58676 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.911830   58676 pod_ready.go:92] pod "kube-controller-manager-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:09.911850   58676 pod_ready.go:81] duration metric: took 5.311751ms waiting for pod "kube-controller-manager-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.911859   58676 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xjfsj" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.916419   58676 pod_ready.go:92] pod "kube-proxy-xjfsj" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:09.916442   58676 pod_ready.go:81] duration metric: took 4.576855ms waiting for pod "kube-proxy-xjfsj" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.916454   58676 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.921501   58676 pod_ready.go:92] pod "kube-scheduler-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:09.921525   58676 pod_ready.go:81] duration metric: took 5.064522ms waiting for pod "kube-scheduler-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:09.921536   58676 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace to be "Ready" ...
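	(The pod_ready.go lines that fill this log come from minikube repeatedly polling each pod until its Ready condition turns True, with a 4m0s budget per pod; ":102" entries are unsuccessful polls, ":92" entries record success. Below is a minimal sketch of that polling pattern using client-go; the kubeconfig path, namespace, and pod name are assumptions for illustration, and this is not minikube's actual pod_ready.go implementation.)

```go
// Minimal sketch of a "wait until pod is Ready" poll loop with client-go.
// Not minikube's code; names and paths are illustrative assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 2s, give up after 4 minutes (mirroring the 4m0s budget in the log).
	err = wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-57f55c9bc5-49wtw", metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		ready := podReady(pod)
		fmt.Printf("pod %q Ready=%v\n", pod.Name, ready)
		return ready, nil
	})
	if err != nil {
		fmt.Println("timed out waiting for pod to be Ready:", err)
	}
}
```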
	I1101 01:02:07.514883   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:10.008399   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:08.490011   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:10.988504   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:12.989076   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:11.507014   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:13.509053   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:12.205003   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:14.705621   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:12.509113   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:15.009543   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:15.487844   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:17.488178   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:16.003423   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:18.003597   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:20.004472   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:17.205434   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:19.214743   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:17.508997   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:20.008838   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:22.009023   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:19.488902   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:21.988210   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:22.004908   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:24.503394   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:21.704199   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:23.704855   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:25.705319   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:24.508980   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:27.008249   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:23.988985   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:26.489079   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:26.504752   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:28.505579   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:27.709065   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:30.205608   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:29.507299   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:31.509017   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:28.988567   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:31.488567   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:30.507770   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:33.005199   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:32.707783   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:35.206392   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:34.007977   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:36.008250   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:33.988120   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:36.489908   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:35.503482   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:37.504132   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:39.504348   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:37.704511   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:39.705791   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:38.008778   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:40.509040   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:38.987615   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:40.988646   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:42.005253   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:44.008492   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:42.206082   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:44.704875   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:43.009095   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:45.508557   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:43.489792   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:45.987971   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:47.989322   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:46.504096   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:49.004605   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:47.205736   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:49.704264   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:47.510014   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:50.009950   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:50.489334   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:52.987877   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:51.005543   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:53.504243   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:52.205173   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:54.704843   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:52.509247   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:55.009346   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:55.488330   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:57.987845   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:55.504494   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:58.003674   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:00.004598   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:57.205092   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:59.705637   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:57.522422   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:00.007902   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:02.009964   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:02:59.987956   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:01.989730   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:02.005953   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:04.007095   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:02.205761   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:04.704065   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:04.508531   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:06.512303   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:04.487667   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:06.487854   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:06.503630   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:08.504993   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:06.704568   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:08.705012   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:09.008519   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:11.509450   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:08.488843   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:10.987614   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:12.989824   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:10.505932   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:13.005799   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:11.203683   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:13.204241   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:15.705287   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:14.008244   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:16.009433   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:15.488278   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:17.988683   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:15.503739   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:17.506253   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:20.004613   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:18.204056   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:20.205312   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:18.009706   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:20.508744   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:20.490044   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:22.989002   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:22.504922   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:25.004156   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:22.704711   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:25.205072   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:23.008359   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:25.509196   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:25.487961   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:27.488324   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:27.008179   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:29.504182   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:27.205671   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:29.208402   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:27.509247   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:30.008627   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:29.988286   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:32.487504   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:31.504973   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:34.004168   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:31.704298   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:33.704452   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:32.507959   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:35.008631   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:37.009271   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:34.488458   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:36.488759   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:36.503146   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:38.504444   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:36.204750   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:38.705346   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:39.507406   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:41.509812   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:38.988439   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:41.491496   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:40.505301   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:42.506003   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:45.004872   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:41.204015   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:43.206055   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:45.705597   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:44.008441   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:46.009900   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:43.987813   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:45.988508   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:47.989201   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:47.505799   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:49.506424   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:48.204686   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:50.704155   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:48.511303   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:51.008360   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:50.488123   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:52.488356   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:52.004387   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:54.505016   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:52.705891   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:54.706732   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:53.008988   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:55.507791   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:54.988620   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:56.990186   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:57.005565   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:59.505220   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:57.205342   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:59.215160   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:57.508013   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:59.509883   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:01.510115   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:03:59.490512   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:01.988008   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:02.004869   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:04.503903   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:01.704963   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:04.204466   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:04.007146   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:06.007815   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:04.488270   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:06.987544   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:06.505818   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:09.006093   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:06.205560   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:08.703961   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:10.705037   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:08.008817   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:10.508585   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:08.988223   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:10.989742   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:12.990669   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:11.503914   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:13.504018   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:13.206290   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:15.704820   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:13.008696   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:15.010312   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:15.487596   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:17.489381   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:15.505665   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:18.004825   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:20.004966   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:18.205022   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:20.703582   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:17.508842   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:20.008489   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:22.008572   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:19.988378   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:22.490000   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:22.005055   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:24.504050   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:22.704263   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:24.704479   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:24.507893   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:27.009371   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:24.988500   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:27.490306   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:26.504850   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:29.003907   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:27.204442   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:29.204906   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:29.508234   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:31.508285   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:29.988549   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:32.490618   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:31.504800   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:33.506025   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:31.704974   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:34.204565   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:33.512784   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:36.009709   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:34.988579   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:37.491535   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:36.011080   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:38.503881   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:36.204772   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:38.205329   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:40.707128   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:38.509404   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:41.009915   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:39.988897   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:42.487751   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:40.504606   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:42.504912   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:44.505101   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:43.205005   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:45.207096   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:43.507714   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:45.508866   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:44.988852   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:47.488268   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:47.004069   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:49.005029   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:47.704762   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:49.705584   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:48.009495   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:50.508392   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:49.488880   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:51.988841   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:51.504680   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:54.010010   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:52.204554   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:54.705101   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:53.008194   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:55.008373   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:57.009351   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:54.489702   58730 pod_ready.go:102] pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:56.389066   58730 pod_ready.go:81] duration metric: took 4m0.000951404s waiting for pod "metrics-server-57f55c9bc5-znchz" in "kube-system" namespace to be "Ready" ...
	E1101 01:04:56.389116   58730 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1101 01:04:56.389139   58730 pod_ready.go:38] duration metric: took 4m11.103640013s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:04:56.389173   58730 kubeadm.go:640] restartCluster took 4m34.207263569s
	W1101 01:04:56.389254   58730 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1101 01:04:56.389292   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1101 01:04:56.504421   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:58.505542   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:56.705911   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:58.706099   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:00.706478   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:04:59.509462   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:02.009472   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:00.509320   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:03.007708   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:03.203884   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:05.204356   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:04.009580   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:06.508160   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:05.505057   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:07.506811   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:10.004080   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:07.205229   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:09.206089   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:08.509319   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:11.009099   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:12.261608   58730 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (15.872291337s)
	I1101 01:05:12.261694   58730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:05:12.275334   58730 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:05:12.284969   58730 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:05:12.295834   58730 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:05:12.295881   58730 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1101 01:05:12.526039   58730 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 01:05:12.005261   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:14.005683   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:11.706864   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:14.204758   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:13.508597   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:16.008784   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:16.506282   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:19.004037   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:16.205361   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:18.704890   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:18.008878   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:20.009861   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:23.201664   58730 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1101 01:05:23.201785   58730 kubeadm.go:322] [preflight] Running pre-flight checks
	I1101 01:05:23.201920   58730 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 01:05:23.202057   58730 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 01:05:23.202178   58730 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 01:05:23.202255   58730 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 01:05:23.204179   58730 out.go:204]   - Generating certificates and keys ...
	I1101 01:05:23.204304   58730 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1101 01:05:23.204384   58730 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1101 01:05:23.204480   58730 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 01:05:23.204557   58730 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1101 01:05:23.204639   58730 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1101 01:05:23.204715   58730 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1101 01:05:23.204792   58730 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1101 01:05:23.204884   58730 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1101 01:05:23.205007   58730 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 01:05:23.205133   58730 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 01:05:23.205195   58730 kubeadm.go:322] [certs] Using the existing "sa" key
	I1101 01:05:23.205273   58730 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 01:05:23.205332   58730 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 01:05:23.205391   58730 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 01:05:23.205461   58730 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 01:05:23.205550   58730 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 01:05:23.205656   58730 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 01:05:23.205734   58730 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 01:05:23.207792   58730 out.go:204]   - Booting up control plane ...
	I1101 01:05:23.207914   58730 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 01:05:23.208028   58730 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 01:05:23.208124   58730 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 01:05:23.208244   58730 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 01:05:23.208322   58730 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 01:05:23.208356   58730 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1101 01:05:23.208496   58730 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 01:05:23.208569   58730 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003034 seconds
	I1101 01:05:23.208662   58730 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 01:05:23.208762   58730 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 01:05:23.208840   58730 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 01:05:23.209055   58730 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-754132 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 01:05:23.209148   58730 kubeadm.go:322] [bootstrap-token] Using token: j0j8ab.rja1mh5j9krst0k4
	I1101 01:05:23.210755   58730 out.go:204]   - Configuring RBAC rules ...
	I1101 01:05:23.210895   58730 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 01:05:23.211001   58730 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 01:05:23.211205   58730 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 01:05:23.211369   58730 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 01:05:23.211509   58730 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 01:05:23.211617   58730 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 01:05:23.211776   58730 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 01:05:23.211851   58730 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1101 01:05:23.211894   58730 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1101 01:05:23.211901   58730 kubeadm.go:322] 
	I1101 01:05:23.211985   58730 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1101 01:05:23.211992   58730 kubeadm.go:322] 
	I1101 01:05:23.212076   58730 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1101 01:05:23.212085   58730 kubeadm.go:322] 
	I1101 01:05:23.212128   58730 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1101 01:05:23.212205   58730 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 01:05:23.212256   58730 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 01:05:23.212263   58730 kubeadm.go:322] 
	I1101 01:05:23.212305   58730 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1101 01:05:23.212314   58730 kubeadm.go:322] 
	I1101 01:05:23.212352   58730 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 01:05:23.212359   58730 kubeadm.go:322] 
	I1101 01:05:23.212400   58730 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1101 01:05:23.212461   58730 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 01:05:23.212568   58730 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 01:05:23.212584   58730 kubeadm.go:322] 
	I1101 01:05:23.212699   58730 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 01:05:23.212787   58730 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1101 01:05:23.212797   58730 kubeadm.go:322] 
	I1101 01:05:23.212862   58730 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token j0j8ab.rja1mh5j9krst0k4 \
	I1101 01:05:23.212943   58730 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 \
	I1101 01:05:23.212962   58730 kubeadm.go:322] 	--control-plane 
	I1101 01:05:23.212968   58730 kubeadm.go:322] 
	I1101 01:05:23.213083   58730 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1101 01:05:23.213093   58730 kubeadm.go:322] 
	I1101 01:05:23.213202   58730 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token j0j8ab.rja1mh5j9krst0k4 \
	I1101 01:05:23.213346   58730 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 
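	The join commands above embed a bootstrap token and a CA certificate hash. If the printed hash is lost, it can be recomputed on the control-plane node from the cluster CA; a sketch using the standard openssl recipe, with the certificate path taken from the certificateDir reported above (/var/lib/minikube/certs) and assuming an RSA CA key:

	    # list bootstrap tokens currently known to the cluster
	    # (inside the minikube VM, kubeadm lives under /var/lib/minikube/binaries/v1.28.3/)
	    sudo kubeadm token list
	    # recompute the sha256 discovery hash expected by kubeadm join
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'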
	I1101 01:05:23.213366   58730 cni.go:84] Creating CNI manager for ""
	I1101 01:05:23.213375   58730 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:05:23.215058   58730 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:05:23.216515   58730 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:05:23.251532   58730 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
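	The 457-byte file written here is the bridge CNI configuration minikube generates for the kvm2 driver with the crio runtime. To see exactly what was written and confirm the referenced plugin binaries exist on the node (the /opt/cni/bin location is the conventional one and an assumption here, not shown in the log):

	    # inspect the generated bridge CNI config (run inside the node)
	    sudo cat /etc/cni/net.d/1-k8s.conflist
	    # the bridge/host-local/portmap binaries it references usually live here
	    ls /opt/cni/bin/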
	I1101 01:05:21.007674   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:23.505067   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:21.204745   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:23.206316   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:25.211036   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:22.507158   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:24.508157   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:26.508990   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:23.291112   58730 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 01:05:23.291192   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:23.291224   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9 minikube.k8s.io/name=embed-certs-754132 minikube.k8s.io/updated_at=2023_11_01T01_05_23_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:23.452410   58730 ops.go:34] apiserver oom_adj: -16
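	Once the API server answers, the two kubectl invocations above can be verified directly; a small sketch, assuming kubectl on the host is pointed at this cluster's context:

	    # the cluster-admin binding created for kube-system:default
	    kubectl get clusterrolebinding minikube-rbac -o wide
	    # the minikube metadata labels just applied to the node
	    kubectl get node embed-certs-754132 --show-labels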
	I1101 01:05:23.635798   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:23.754993   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:24.350830   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:24.850468   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:25.350887   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:25.850719   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:26.350946   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:26.850869   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:27.350851   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:27.850856   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:25.507083   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:27.511273   59148 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:29.974545   59148 pod_ready.go:81] duration metric: took 4m0.000148043s waiting for pod "metrics-server-57f55c9bc5-6szl7" in "kube-system" namespace to be "Ready" ...
	E1101 01:05:29.974585   59148 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1101 01:05:29.974607   59148 pod_ready.go:38] duration metric: took 4m5.715718658s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:05:29.974652   59148 kubeadm.go:640] restartCluster took 4m26.139306333s
	W1101 01:05:29.974746   59148 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1101 01:05:29.974779   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1101 01:05:27.704338   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:30.205751   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:29.008649   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:31.009235   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:28.350920   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:28.850670   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:29.350172   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:29.850241   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:30.351225   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:30.851276   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:31.350289   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:31.850999   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:32.350874   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:32.850500   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:32.708147   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:35.205568   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:33.351023   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:33.851109   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:34.351257   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:34.850212   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:35.350277   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:35.850281   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:36.350770   58730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:36.456508   58730 kubeadm.go:1081] duration metric: took 13.165385995s to wait for elevateKubeSystemPrivileges.
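	The long run of identical "kubectl get sa default" calls above is a poll loop: elevateKubeSystemPrivileges keeps retrying until the default service account exists in the fresh cluster. The same check, done once by hand from the host:

	    # returns a row (and exit code 0) once the default service account has been created
	    kubectl -n default get serviceaccount default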
	I1101 01:05:36.456550   58730 kubeadm.go:406] StartCluster complete in 5m14.31984828s
	I1101 01:05:36.456575   58730 settings.go:142] acquiring lock: {Name:mk7f269e64dfd8d176737f993e01f6e6badafbc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:05:36.456674   58730 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 01:05:36.458488   58730 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/kubeconfig: {Name:mk08da65b6c71084e1cfafb19800038e8c8303e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:05:36.458789   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 01:05:36.458936   58730 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1101 01:05:36.459029   58730 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-754132"
	I1101 01:05:36.459061   58730 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-754132"
	W1101 01:05:36.459076   58730 addons.go:240] addon storage-provisioner should already be in state true
	I1101 01:05:36.459086   58730 addons.go:69] Setting metrics-server=true in profile "embed-certs-754132"
	I1101 01:05:36.459124   58730 addons.go:231] Setting addon metrics-server=true in "embed-certs-754132"
	I1101 01:05:36.459134   58730 host.go:66] Checking if "embed-certs-754132" exists ...
	I1101 01:05:36.459060   58730 config.go:182] Loaded profile config "embed-certs-754132": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:05:36.459062   58730 addons.go:69] Setting default-storageclass=true in profile "embed-certs-754132"
	I1101 01:05:36.459219   58730 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-754132"
	W1101 01:05:36.459138   58730 addons.go:240] addon metrics-server should already be in state true
	I1101 01:05:36.459347   58730 host.go:66] Checking if "embed-certs-754132" exists ...
	I1101 01:05:36.459588   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.459633   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.459638   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.459674   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.459689   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.459713   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.477136   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40825
	I1101 01:05:36.477207   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37895
	I1101 01:05:36.477706   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46261
	I1101 01:05:36.477874   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.477889   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.478086   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.478388   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.478405   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.478540   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.478561   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.478601   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.478622   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.478794   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.478990   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.479037   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.479219   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetState
	I1101 01:05:36.479379   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.479412   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.479587   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.479623   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.483272   58730 addons.go:231] Setting addon default-storageclass=true in "embed-certs-754132"
	W1101 01:05:36.483295   58730 addons.go:240] addon default-storageclass should already be in state true
	I1101 01:05:36.483318   58730 host.go:66] Checking if "embed-certs-754132" exists ...
	I1101 01:05:36.483665   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.483696   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.498137   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46727
	I1101 01:05:36.498148   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37157
	I1101 01:05:36.498530   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.499000   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.499024   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.499329   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.499499   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetState
	I1101 01:05:36.501223   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:05:36.503752   58730 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:05:36.505580   58730 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:05:36.505600   58730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 01:05:36.505617   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:05:36.505756   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37761
	I1101 01:05:36.506307   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.506765   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.506783   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.507257   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.507303   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.507766   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.507786   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.507852   58730 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:05:36.507894   58730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:05:36.508136   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.508296   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetState
	I1101 01:05:36.509982   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:05:36.512303   58730 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1101 01:05:36.512065   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:05:36.513712   58730 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 01:05:36.513728   58730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 01:05:36.513749   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:05:36.512082   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:05:36.513819   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:05:36.513839   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:05:36.516632   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:05:36.516867   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:05:36.517052   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:05:36.517489   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:05:36.518036   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:05:36.518058   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:05:36.518360   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:05:36.519431   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:05:36.519602   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:05:36.519742   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:05:36.526881   58730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35481
	I1101 01:05:36.527462   58730 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:05:36.527889   58730 main.go:141] libmachine: Using API Version  1
	I1101 01:05:36.527902   58730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:05:36.528341   58730 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:05:36.528511   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetState
	I1101 01:05:36.530250   58730 main.go:141] libmachine: (embed-certs-754132) Calling .DriverName
	I1101 01:05:36.530539   58730 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 01:05:36.530557   58730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 01:05:36.530575   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHHostname
	I1101 01:05:36.533671   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:05:36.534068   58730 main.go:141] libmachine: (embed-certs-754132) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:2f:dd", ip: ""} in network mk-embed-certs-754132: {Iface:virbr3 ExpiryTime:2023-11-01 02:00:05 +0000 UTC Type:0 Mac:52:54:00:5e:2f:dd Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:embed-certs-754132 Clientid:01:52:54:00:5e:2f:dd}
	I1101 01:05:36.534093   58730 main.go:141] libmachine: (embed-certs-754132) DBG | domain embed-certs-754132 has defined IP address 192.168.61.83 and MAC address 52:54:00:5e:2f:dd in network mk-embed-certs-754132
	I1101 01:05:36.534368   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHPort
	I1101 01:05:36.534596   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHKeyPath
	I1101 01:05:36.534741   58730 main.go:141] libmachine: (embed-certs-754132) Calling .GetSSHUsername
	I1101 01:05:36.534821   58730 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/embed-certs-754132/id_rsa Username:docker}
	I1101 01:05:36.559098   58730 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-754132" context rescaled to 1 replicas
	I1101 01:05:36.559135   58730 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.83 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 01:05:36.561061   58730 out.go:177] * Verifying Kubernetes components...
	I1101 01:05:33.009726   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:35.507972   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:36.562382   58730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:05:36.684098   58730 node_ready.go:35] waiting up to 6m0s for node "embed-certs-754132" to be "Ready" ...
	I1101 01:05:36.684219   58730 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 01:05:36.689836   58730 node_ready.go:49] node "embed-certs-754132" has status "Ready":"True"
	I1101 01:05:36.689863   58730 node_ready.go:38] duration metric: took 5.731179ms waiting for node "embed-certs-754132" to be "Ready" ...
	I1101 01:05:36.689875   58730 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:05:36.707509   58730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:05:36.743671   58730 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 01:05:36.743702   58730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1101 01:05:36.764886   58730 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:36.773895   58730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 01:05:36.810064   58730 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 01:05:36.810095   58730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 01:05:36.888833   58730 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:05:36.888854   58730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 01:05:36.892836   58730 pod_ready.go:92] pod "etcd-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:05:36.892864   58730 pod_ready.go:81] duration metric: took 127.938482ms waiting for pod "etcd-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:36.892879   58730 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:36.968554   58730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:05:36.978210   58730 pod_ready.go:92] pod "kube-apiserver-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:05:36.978239   58730 pod_ready.go:81] duration metric: took 85.351942ms waiting for pod "kube-apiserver-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:36.978254   58730 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:37.154956   58730 pod_ready.go:92] pod "kube-controller-manager-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:05:37.154983   58730 pod_ready.go:81] duration metric: took 176.720364ms waiting for pod "kube-controller-manager-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:37.154997   58730 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cwbfz" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:38.405267   58730 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.720993157s)
	I1101 01:05:38.405302   58730 start.go:926] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
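	The sed pipeline that just completed rewrites the CoreDNS Corefile so host.minikube.internal resolves to the host-side gateway (192.168.61.1 here). Two hedged ways to confirm it took effect; the one-off pod name and busybox image in the second command are illustrative assumptions, not something the test itself runs:

	    # show the injected hosts block in the Corefile
	    kubectl -n kube-system get configmap coredns -o yaml | grep -A4 'hosts {'
	    # resolve the name from inside the cluster with a throwaway pod
	    kubectl run dns-check --rm -it --restart=Never --image=busybox:1.36 -- nslookup host.minikube.internal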
	I1101 01:05:38.840834   58730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.133283925s)
	I1101 01:05:38.840891   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.840906   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.840918   58730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.066970508s)
	I1101 01:05:38.841048   58730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.872463156s)
	I1101 01:05:38.841085   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.841098   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.841320   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.841370   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.841373   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.841328   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.841400   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.841412   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.841426   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.841390   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.841442   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.841454   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.841457   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.841354   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.844717   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.844730   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.844723   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.844744   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.844753   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.844757   58730 addons.go:467] Verifying addon metrics-server=true in "embed-certs-754132"
	I1101 01:05:38.844763   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.844774   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.844773   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.844789   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.844799   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.844808   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.845059   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.845077   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.845092   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.890752   58730 main.go:141] libmachine: Making call to close driver server
	I1101 01:05:38.890785   58730 main.go:141] libmachine: (embed-certs-754132) Calling .Close
	I1101 01:05:38.891075   58730 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:05:38.891095   58730 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:05:38.891108   58730 main.go:141] libmachine: (embed-certs-754132) DBG | Closing plugin on server side
	I1101 01:05:38.892878   58730 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I1101 01:05:37.706877   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:39.707206   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:38.894405   58730 addons.go:502] enable addons completed in 2.435477984s: enabled=[metrics-server storage-provisioner default-storageclass]
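	metrics-server is reported as enabled, yet the pod inventory just below still shows it Pending, and the interleaved pod_ready lines for the other profiles show the same symptom. The "Using image fake.domain/registry.k8s.io/echoserver:1.4" line earlier suggests why: the addon is pointed at an unreachable registry, so the image pull cannot succeed. A hedged way to see the failure, assuming the addon's usual k8s-app=metrics-server label:

	    # deployment and pod status for the addon
	    kubectl -n kube-system get deploy,pods -l k8s-app=metrics-server
	    # events should show the failed pull against the fake.domain registry
	    kubectl -n kube-system describe pod -l k8s-app=metrics-server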
	I1101 01:05:39.279100   58730 pod_ready.go:102] pod "kube-proxy-cwbfz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:40.775597   58730 pod_ready.go:92] pod "kube-proxy-cwbfz" in "kube-system" namespace has status "Ready":"True"
	I1101 01:05:40.775622   58730 pod_ready.go:81] duration metric: took 3.620618998s waiting for pod "kube-proxy-cwbfz" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:40.775644   58730 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:40.782773   58730 pod_ready.go:92] pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace has status "Ready":"True"
	I1101 01:05:40.782796   58730 pod_ready.go:81] duration metric: took 7.145643ms waiting for pod "kube-scheduler-embed-certs-754132" in "kube-system" namespace to be "Ready" ...
	I1101 01:05:40.782806   58730 pod_ready.go:38] duration metric: took 4.092919772s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:05:40.782821   58730 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:05:40.782868   58730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:05:40.811977   58730 api_server.go:72] duration metric: took 4.252812827s to wait for apiserver process to appear ...
	I1101 01:05:40.812000   58730 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:05:40.812017   58730 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8443/healthz ...
	I1101 01:05:40.817524   58730 api_server.go:279] https://192.168.61.83:8443/healthz returned 200:
	ok
	I1101 01:05:40.819599   58730 api_server.go:141] control plane version: v1.28.3
	I1101 01:05:40.819625   58730 api_server.go:131] duration metric: took 7.617418ms to wait for apiserver health ...
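	The healthz probe minikube performs here can be reproduced from the host; /healthz is readable without credentials in recent Kubernetes (via the system:public-info-viewer role), so plain curl against the endpoint from the log should be enough:

	    # same endpoint minikube just checked; -k because the API serves a cluster-local CA
	    curl -k https://192.168.61.83:8443/healthz
	    # per-check breakdown
	    curl -k 'https://192.168.61.83:8443/healthz?verbose'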
	I1101 01:05:40.819636   58730 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:05:40.826677   58730 system_pods.go:59] 8 kube-system pods found
	I1101 01:05:40.826714   58730 system_pods.go:61] "coredns-5dd5756b68-6kqbc" [e03e6370-35d1-4438-8b18-d62b0a253ea6] Running
	I1101 01:05:40.826722   58730 system_pods.go:61] "etcd-embed-certs-754132" [2cd8789c-8ba8-47ea-82f2-e461cbc9d3b3] Running
	I1101 01:05:40.826729   58730 system_pods.go:61] "kube-apiserver-embed-certs-754132" [81bd13a3-37ea-4bf6-9eb9-e66318137a21] Running
	I1101 01:05:40.826735   58730 system_pods.go:61] "kube-controller-manager-embed-certs-754132" [6aa18435-1990-479b-b975-7ac1d794d967] Running
	I1101 01:05:40.826742   58730 system_pods.go:61] "kube-proxy-cwbfz" [b7f5ba1e-bd63-456b-94cc-0e2c121b7792] Running
	I1101 01:05:40.826748   58730 system_pods.go:61] "kube-scheduler-embed-certs-754132" [64203f31-7c41-42d0-9d6b-bc63e1b423cc] Running
	I1101 01:05:40.826758   58730 system_pods.go:61] "metrics-server-57f55c9bc5-499xs" [617aecda-f132-4358-9da9-bbc4fc625da0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:05:40.826773   58730 system_pods.go:61] "storage-provisioner" [7feb8931-83d0-4968-a295-a4202e8fc8c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 01:05:40.826786   58730 system_pods.go:74] duration metric: took 7.142747ms to wait for pod list to return data ...
	I1101 01:05:40.826799   58730 default_sa.go:34] waiting for default service account to be created ...
	I1101 01:05:40.831268   58730 default_sa.go:45] found service account: "default"
	I1101 01:05:40.831295   58730 default_sa.go:55] duration metric: took 4.485602ms for default service account to be created ...
	I1101 01:05:40.831309   58730 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 01:05:40.891306   58730 system_pods.go:86] 8 kube-system pods found
	I1101 01:05:40.891335   58730 system_pods.go:89] "coredns-5dd5756b68-6kqbc" [e03e6370-35d1-4438-8b18-d62b0a253ea6] Running
	I1101 01:05:40.891341   58730 system_pods.go:89] "etcd-embed-certs-754132" [2cd8789c-8ba8-47ea-82f2-e461cbc9d3b3] Running
	I1101 01:05:40.891346   58730 system_pods.go:89] "kube-apiserver-embed-certs-754132" [81bd13a3-37ea-4bf6-9eb9-e66318137a21] Running
	I1101 01:05:40.891350   58730 system_pods.go:89] "kube-controller-manager-embed-certs-754132" [6aa18435-1990-479b-b975-7ac1d794d967] Running
	I1101 01:05:40.891354   58730 system_pods.go:89] "kube-proxy-cwbfz" [b7f5ba1e-bd63-456b-94cc-0e2c121b7792] Running
	I1101 01:05:40.891358   58730 system_pods.go:89] "kube-scheduler-embed-certs-754132" [64203f31-7c41-42d0-9d6b-bc63e1b423cc] Running
	I1101 01:05:40.891366   58730 system_pods.go:89] "metrics-server-57f55c9bc5-499xs" [617aecda-f132-4358-9da9-bbc4fc625da0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:05:40.891373   58730 system_pods.go:89] "storage-provisioner" [7feb8931-83d0-4968-a295-a4202e8fc8c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 01:05:40.891381   58730 system_pods.go:126] duration metric: took 60.065984ms to wait for k8s-apps to be running ...
	I1101 01:05:40.891391   58730 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 01:05:40.891436   58730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:05:40.906845   58730 system_svc.go:56] duration metric: took 15.443235ms WaitForService to wait for kubelet.
	I1101 01:05:40.906875   58730 kubeadm.go:581] duration metric: took 4.347718478s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1101 01:05:40.906895   58730 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:05:41.089628   58730 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:05:41.089654   58730 node_conditions.go:123] node cpu capacity is 2
	I1101 01:05:41.089664   58730 node_conditions.go:105] duration metric: took 182.764311ms to run NodePressure ...
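	The NodePressure step reads the node's capacity and condition fields; the same figures (ephemeral-storage 17784752Ki, cpu 2) are visible with kubectl:

	    # raw capacity map the check is summarizing
	    kubectl get node embed-certs-754132 -o jsonpath='{.status.capacity}{"\n"}'
	    # the pressure conditions themselves (MemoryPressure, DiskPressure, PIDPressure)
	    kubectl describe node embed-certs-754132 | grep -E 'Pressure'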
	I1101 01:05:41.089674   58730 start.go:228] waiting for startup goroutines ...
	I1101 01:05:41.089680   58730 start.go:233] waiting for cluster config update ...
	I1101 01:05:41.089693   58730 start.go:242] writing updated cluster config ...
	I1101 01:05:41.089950   58730 ssh_runner.go:195] Run: rm -f paused
	I1101 01:05:41.140594   58730 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1101 01:05:41.143142   58730 out.go:177] * Done! kubectl is now configured to use "embed-certs-754132" cluster and "default" namespace by default
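	With the start complete, embed-certs-754132 is the active kubectl context; a quick smoke check of the state the log describes:

	    kubectl config current-context    # should print embed-certs-754132
	    kubectl get nodes -o wide         # the single control-plane node reports Ready
	    kubectl -n kube-system get pods   # metrics-server remains Pending, as noted above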
	I1101 01:05:37.516552   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:40.009373   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:43.882201   59148 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.907397495s)
	I1101 01:05:43.882275   59148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:05:43.897793   59148 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:05:43.908350   59148 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:05:43.919013   59148 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:05:43.919066   59148 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1101 01:05:43.992534   59148 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1101 01:05:43.992653   59148 kubeadm.go:322] [preflight] Running pre-flight checks
	I1101 01:05:44.162750   59148 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 01:05:44.162906   59148 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 01:05:44.163052   59148 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 01:05:44.398016   59148 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 01:05:44.399998   59148 out.go:204]   - Generating certificates and keys ...
	I1101 01:05:44.400102   59148 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1101 01:05:44.400189   59148 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1101 01:05:44.400334   59148 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 01:05:44.400431   59148 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1101 01:05:44.400526   59148 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1101 01:05:44.400602   59148 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1101 01:05:44.400736   59148 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1101 01:05:44.400821   59148 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1101 01:05:44.401336   59148 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 01:05:44.401936   59148 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 01:05:44.402420   59148 kubeadm.go:322] [certs] Using the existing "sa" key
	I1101 01:05:44.402515   59148 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 01:05:44.470807   59148 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 01:05:44.642677   59148 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 01:05:44.768991   59148 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 01:05:45.052817   59148 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 01:05:45.053698   59148 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 01:05:45.056339   59148 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 01:05:42.204108   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:44.205679   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:42.508073   58823 pod_ready.go:102] pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:43.201762   58823 pod_ready.go:81] duration metric: took 4m0.000100455s waiting for pod "metrics-server-74d5856cc6-kljcd" in "kube-system" namespace to be "Ready" ...
	E1101 01:05:43.201795   58823 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1101 01:05:43.201816   58823 pod_ready.go:38] duration metric: took 4m1.199592624s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:05:43.201848   58823 kubeadm.go:640] restartCluster took 4m57.555406731s
	W1101 01:05:43.201899   58823 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1101 01:05:43.201920   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1101 01:05:45.058304   59148 out.go:204]   - Booting up control plane ...
	I1101 01:05:45.058434   59148 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 01:05:45.058565   59148 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 01:05:45.060937   59148 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 01:05:45.078776   59148 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 01:05:45.079692   59148 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 01:05:45.079771   59148 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1101 01:05:45.204880   59148 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 01:05:46.208575   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:48.705698   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:50.708163   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:48.240337   58823 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.038387523s)
	I1101 01:05:48.240417   58823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:05:48.257585   58823 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:05:48.266949   58823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:05:48.277302   58823 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:05:48.277346   58823 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1101 01:05:48.514394   58823 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 01:05:54.708746   59148 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503354 seconds
	I1101 01:05:54.708894   59148 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 01:05:54.726194   59148 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 01:05:55.266392   59148 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 01:05:55.266670   59148 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-639310 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 01:05:55.783906   59148 kubeadm.go:322] [bootstrap-token] Using token: ilpx6n.m6vs8mqxrjuf2w8f
	I1101 01:05:53.205312   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:55.206016   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:55.786231   59148 out.go:204]   - Configuring RBAC rules ...
	I1101 01:05:55.786370   59148 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 01:05:55.793682   59148 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 01:05:55.812319   59148 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 01:05:55.819324   59148 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 01:05:55.825785   59148 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 01:05:55.831793   59148 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 01:05:55.858443   59148 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 01:05:56.195472   59148 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1101 01:05:56.248405   59148 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1101 01:05:56.249655   59148 kubeadm.go:322] 
	I1101 01:05:56.249745   59148 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1101 01:05:56.249759   59148 kubeadm.go:322] 
	I1101 01:05:56.249852   59148 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1101 01:05:56.249869   59148 kubeadm.go:322] 
	I1101 01:05:56.249931   59148 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1101 01:05:56.249992   59148 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 01:05:56.250076   59148 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 01:05:56.250088   59148 kubeadm.go:322] 
	I1101 01:05:56.250163   59148 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1101 01:05:56.250172   59148 kubeadm.go:322] 
	I1101 01:05:56.250261   59148 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 01:05:56.250281   59148 kubeadm.go:322] 
	I1101 01:05:56.250344   59148 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1101 01:05:56.250436   59148 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 01:05:56.250560   59148 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 01:05:56.250574   59148 kubeadm.go:322] 
	I1101 01:05:56.250663   59148 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 01:05:56.250757   59148 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1101 01:05:56.250769   59148 kubeadm.go:322] 
	I1101 01:05:56.250881   59148 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token ilpx6n.m6vs8mqxrjuf2w8f \
	I1101 01:05:56.251011   59148 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 \
	I1101 01:05:56.251041   59148 kubeadm.go:322] 	--control-plane 
	I1101 01:05:56.251053   59148 kubeadm.go:322] 
	I1101 01:05:56.251150   59148 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1101 01:05:56.251162   59148 kubeadm.go:322] 
	I1101 01:05:56.251259   59148 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token ilpx6n.m6vs8mqxrjuf2w8f \
	I1101 01:05:56.251383   59148 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 
	I1101 01:05:56.251922   59148 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 01:05:56.251982   59148 cni.go:84] Creating CNI manager for ""
	I1101 01:05:56.252008   59148 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:05:56.254247   59148 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:05:56.256068   59148 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:05:56.281994   59148 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1101 01:05:56.324660   59148 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 01:05:56.324796   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:56.324863   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9 minikube.k8s.io/name=default-k8s-diff-port-639310 minikube.k8s.io/updated_at=2023_11_01T01_05_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:56.739064   59148 ops.go:34] apiserver oom_adj: -16
	I1101 01:05:56.739245   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:56.834852   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:57.432044   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:57.931920   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:58.432414   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:58.932871   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:59.432755   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:59.932515   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:05:57.704234   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:05:59.705516   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:01.231970   58823 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1101 01:06:01.232062   58823 kubeadm.go:322] [preflight] Running pre-flight checks
	I1101 01:06:01.232156   58823 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 01:06:01.232289   58823 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 01:06:01.232419   58823 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 01:06:01.232595   58823 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 01:06:01.232714   58823 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 01:06:01.232790   58823 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1101 01:06:01.232890   58823 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 01:06:01.235429   58823 out.go:204]   - Generating certificates and keys ...
	I1101 01:06:01.235533   58823 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1101 01:06:01.235606   58823 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1101 01:06:01.235675   58823 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 01:06:01.235782   58823 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1101 01:06:01.235889   58823 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1101 01:06:01.235973   58823 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1101 01:06:01.236065   58823 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1101 01:06:01.236153   58823 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1101 01:06:01.236263   58823 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 01:06:01.236383   58823 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 01:06:01.236447   58823 kubeadm.go:322] [certs] Using the existing "sa" key
	I1101 01:06:01.236528   58823 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 01:06:01.236607   58823 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 01:06:01.236728   58823 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 01:06:01.236811   58823 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 01:06:01.236877   58823 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 01:06:01.236955   58823 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 01:06:01.238699   58823 out.go:204]   - Booting up control plane ...
	I1101 01:06:01.238808   58823 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 01:06:01.238904   58823 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 01:06:01.238990   58823 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 01:06:01.239092   58823 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 01:06:01.239289   58823 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 01:06:01.239387   58823 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.004023 seconds
	I1101 01:06:01.239528   58823 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 01:06:01.239741   58823 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 01:06:01.239796   58823 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 01:06:01.239971   58823 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-330042 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1101 01:06:01.240056   58823 kubeadm.go:322] [bootstrap-token] Using token: lseik6.3ozwuciianl7vrri
	I1101 01:06:01.241690   58823 out.go:204]   - Configuring RBAC rules ...
	I1101 01:06:01.241825   58823 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 01:06:01.242015   58823 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 01:06:01.242170   58823 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 01:06:01.242265   58823 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 01:06:01.242380   58823 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 01:06:01.242448   58823 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1101 01:06:01.242517   58823 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1101 01:06:01.242549   58823 kubeadm.go:322] 
	I1101 01:06:01.242631   58823 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1101 01:06:01.242646   58823 kubeadm.go:322] 
	I1101 01:06:01.242753   58823 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1101 01:06:01.242764   58823 kubeadm.go:322] 
	I1101 01:06:01.242801   58823 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1101 01:06:01.242883   58823 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 01:06:01.242956   58823 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 01:06:01.242965   58823 kubeadm.go:322] 
	I1101 01:06:01.243041   58823 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1101 01:06:01.243152   58823 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 01:06:01.243249   58823 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 01:06:01.243261   58823 kubeadm.go:322] 
	I1101 01:06:01.243357   58823 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1101 01:06:01.243421   58823 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1101 01:06:01.243425   58823 kubeadm.go:322] 
	I1101 01:06:01.243490   58823 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token lseik6.3ozwuciianl7vrri \
	I1101 01:06:01.243597   58823 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 \
	I1101 01:06:01.243619   58823 kubeadm.go:322]     --control-plane 	  
	I1101 01:06:01.243623   58823 kubeadm.go:322] 
	I1101 01:06:01.243697   58823 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1101 01:06:01.243702   58823 kubeadm.go:322] 
	I1101 01:06:01.243773   58823 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token lseik6.3ozwuciianl7vrri \
	I1101 01:06:01.243923   58823 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 
	I1101 01:06:01.243967   58823 cni.go:84] Creating CNI manager for ""
	I1101 01:06:01.243979   58823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:06:01.246766   58823 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:06:01.248244   58823 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:06:01.274713   58823 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1101 01:06:01.299087   58823 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 01:06:01.299184   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:01.299241   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9 minikube.k8s.io/name=old-k8s-version-330042 minikube.k8s.io/updated_at=2023_11_01T01_06_01_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:01.350480   58823 ops.go:34] apiserver oom_adj: -16
	I1101 01:06:01.668212   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:01.795923   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:02.398955   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:00.432038   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:00.932486   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:01.431924   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:01.932050   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:02.432828   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:02.932070   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:03.432833   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:03.931826   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:04.432522   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:04.932660   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:01.705717   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:04.205431   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:02.899285   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:03.398507   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:03.898445   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:04.399301   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:04.898647   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:05.399211   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:05.899099   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:06.398426   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:06.898703   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:07.399266   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:05.431880   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:05.932001   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:06.432804   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:06.932744   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:07.432405   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:07.932531   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:08.432007   59148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:08.560694   59148 kubeadm.go:1081] duration metric: took 12.235943971s to wait for elevateKubeSystemPrivileges.
	I1101 01:06:08.560733   59148 kubeadm.go:406] StartCluster complete in 5m4.77698433s
	I1101 01:06:08.560756   59148 settings.go:142] acquiring lock: {Name:mk7f269e64dfd8d176737f993e01f6e6badafbc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:06:08.560862   59148 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 01:06:08.563346   59148 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/kubeconfig: {Name:mk08da65b6c71084e1cfafb19800038e8c8303e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:06:08.563655   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 01:06:08.563793   59148 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1101 01:06:08.563857   59148 config.go:182] Loaded profile config "default-k8s-diff-port-639310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:06:08.563874   59148 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-639310"
	I1101 01:06:08.563892   59148 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-639310"
	I1101 01:06:08.563905   59148 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-639310"
	I1101 01:06:08.563917   59148 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-639310"
	I1101 01:06:08.563950   59148 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-639310"
	I1101 01:06:08.563899   59148 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-639310"
	W1101 01:06:08.563962   59148 addons.go:240] addon metrics-server should already be in state true
	W1101 01:06:08.563990   59148 addons.go:240] addon storage-provisioner should already be in state true
	I1101 01:06:08.564025   59148 host.go:66] Checking if "default-k8s-diff-port-639310" exists ...
	I1101 01:06:08.564064   59148 host.go:66] Checking if "default-k8s-diff-port-639310" exists ...
	I1101 01:06:08.564369   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.564404   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.564421   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.564453   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.564455   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.564488   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.581714   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37509
	I1101 01:06:08.582180   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.583081   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35137
	I1101 01:06:08.583312   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.583332   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.583553   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41541
	I1101 01:06:08.583702   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.583714   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.583891   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.584174   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.584200   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.584272   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.584302   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.584638   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.584687   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.584737   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.584993   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.585152   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetState
	I1101 01:06:08.585215   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.585256   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.588703   59148 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-639310"
	W1101 01:06:08.588728   59148 addons.go:240] addon default-storageclass should already be in state true
	I1101 01:06:08.588754   59148 host.go:66] Checking if "default-k8s-diff-port-639310" exists ...
	I1101 01:06:08.589158   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.589193   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.600826   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40787
	I1101 01:06:08.601314   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.601952   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.601976   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.602335   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.602560   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetState
	I1101 01:06:08.603276   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35887
	I1101 01:06:08.603415   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36765
	I1101 01:06:08.603803   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.604098   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.604276   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.604290   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.604490   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.604506   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.604573   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:06:08.604778   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.606338   59148 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:06:08.605001   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.605380   59148 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:08.607632   59148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:08.607705   59148 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:06:08.607717   59148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 01:06:08.607731   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:06:08.607995   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetState
	I1101 01:06:08.610424   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:06:08.612025   59148 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1101 01:06:08.613346   59148 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 01:06:08.613365   59148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 01:06:08.613386   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:06:08.611304   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:06:08.611864   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:06:08.613461   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:06:08.613508   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:06:08.613650   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:06:08.613769   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:06:08.613869   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:06:08.618717   59148 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-639310" context rescaled to 1 replicas
	I1101 01:06:08.618755   59148 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.97 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 01:06:08.620291   59148 out.go:177] * Verifying Kubernetes components...
	I1101 01:06:08.618896   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:06:08.620048   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:06:08.621662   59148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:06:08.621747   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:06:08.621777   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:06:08.622129   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:06:08.622359   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:06:08.622526   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:06:08.629241   59148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42169
	I1101 01:06:08.629773   59148 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:08.630164   59148 main.go:141] libmachine: Using API Version  1
	I1101 01:06:08.630181   59148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:08.630428   59148 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:08.630558   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetState
	I1101 01:06:08.631892   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .DriverName
	I1101 01:06:08.632176   59148 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 01:06:08.632197   59148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 01:06:08.632216   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHHostname
	I1101 01:06:08.634872   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:06:08.635211   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:e0:44", ip: ""} in network mk-default-k8s-diff-port-639310: {Iface:virbr4 ExpiryTime:2023-11-01 02:00:48 +0000 UTC Type:0 Mac:52:54:00:83:e0:44 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:default-k8s-diff-port-639310 Clientid:01:52:54:00:83:e0:44}
	I1101 01:06:08.635241   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | domain default-k8s-diff-port-639310 has defined IP address 192.168.72.97 and MAC address 52:54:00:83:e0:44 in network mk-default-k8s-diff-port-639310
	I1101 01:06:08.635375   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHPort
	I1101 01:06:08.635576   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHKeyPath
	I1101 01:06:08.635713   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .GetSSHUsername
	I1101 01:06:08.635839   59148 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/default-k8s-diff-port-639310/id_rsa Username:docker}
	I1101 01:06:08.984005   59148 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 01:06:08.984032   59148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1101 01:06:08.991838   59148 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-639310" to be "Ready" ...
	I1101 01:06:08.991921   59148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 01:06:09.011096   59148 node_ready.go:49] node "default-k8s-diff-port-639310" has status "Ready":"True"
	I1101 01:06:09.011124   59148 node_ready.go:38] duration metric: took 19.250763ms waiting for node "default-k8s-diff-port-639310" to be "Ready" ...
	I1101 01:06:09.011136   59148 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:06:09.043526   59148 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:09.071032   59148 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 01:06:09.071065   59148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 01:06:09.089683   59148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 01:06:09.090332   59148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:06:09.139676   59148 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:06:09.139702   59148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 01:06:09.219436   59148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:06:06.705499   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:09.207584   58676 pod_ready.go:102] pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:09.922465   58676 pod_ready.go:81] duration metric: took 4m0.000913678s waiting for pod "metrics-server-57f55c9bc5-49wtw" in "kube-system" namespace to be "Ready" ...
	E1101 01:06:09.922511   58676 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1101 01:06:09.922529   58676 pod_ready.go:38] duration metric: took 4m11.570999497s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:06:09.922566   58676 kubeadm.go:640] restartCluster took 4m30.866358786s
	W1101 01:06:09.922644   58676 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1101 01:06:09.922688   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1101 01:06:11.075881   59148 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.083916099s)
	I1101 01:06:11.075915   59148 start.go:926] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1101 01:06:11.075946   59148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.986221728s)
	I1101 01:06:11.075997   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.076012   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.076348   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.076367   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.076377   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.076386   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.076620   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.076639   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.119713   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.119741   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.120145   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.120170   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.120145   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | Closing plugin on server side
	I1101 01:06:11.172242   59148 pod_ready.go:102] pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:11.954880   59148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.864508967s)
	I1101 01:06:11.954945   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.954960   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.955014   59148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.735537793s)
	I1101 01:06:11.955074   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.955088   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.955379   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | Closing plugin on server side
	I1101 01:06:11.955411   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.955418   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.955429   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.955438   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.957487   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) DBG | Closing plugin on server side
	I1101 01:06:11.957532   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.957549   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.957537   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.957612   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.957566   59148 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-639310"
	I1101 01:06:11.957643   59148 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:11.957672   59148 main.go:141] libmachine: (default-k8s-diff-port-639310) Calling .Close
	I1101 01:06:11.958036   59148 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:11.958063   59148 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:11.960489   59148 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I1101 01:06:07.899402   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:08.398731   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:08.898547   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:09.399015   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:09.898437   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:10.399024   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:10.899108   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:11.398482   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:11.898943   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:12.399022   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:11.962129   59148 addons.go:502] enable addons completed in 3.39833009s: enabled=[default-storageclass metrics-server storage-provisioner]
	I1101 01:06:13.684297   59148 pod_ready.go:102] pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:12.899212   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:13.398415   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:13.898444   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:14.398630   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:14.898427   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:15.399212   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:15.898869   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:16.399289   58823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:16.588122   58823 kubeadm.go:1081] duration metric: took 15.28901357s to wait for elevateKubeSystemPrivileges.
	I1101 01:06:16.588166   58823 kubeadm.go:406] StartCluster complete in 5m31.002121514s
	I1101 01:06:16.588190   58823 settings.go:142] acquiring lock: {Name:mk7f269e64dfd8d176737f993e01f6e6badafbc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:06:16.588290   58823 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 01:06:16.590925   58823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/kubeconfig: {Name:mk08da65b6c71084e1cfafb19800038e8c8303e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:06:16.591235   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 01:06:16.591339   58823 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1101 01:06:16.591416   58823 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-330042"
	I1101 01:06:16.591436   58823 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-330042"
	W1101 01:06:16.591444   58823 addons.go:240] addon storage-provisioner should already be in state true
	I1101 01:06:16.591477   58823 config.go:182] Loaded profile config "old-k8s-version-330042": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1101 01:06:16.591517   58823 host.go:66] Checking if "old-k8s-version-330042" exists ...
	I1101 01:06:16.591525   58823 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-330042"
	I1101 01:06:16.591541   58823 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-330042"
	I1101 01:06:16.591923   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.591924   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.591962   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.591980   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.592045   58823 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-330042"
	I1101 01:06:16.592064   58823 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-330042"
	W1101 01:06:16.592071   58823 addons.go:240] addon metrics-server should already be in state true
	I1101 01:06:16.592104   58823 host.go:66] Checking if "old-k8s-version-330042" exists ...
	I1101 01:06:16.592424   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.592468   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.610602   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35459
	I1101 01:06:16.611188   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.611722   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.611752   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.611893   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35425
	I1101 01:06:16.612233   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.612315   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.612802   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.612841   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.613196   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.613215   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.613550   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.613571   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39319
	I1101 01:06:16.613949   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.614126   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.614159   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.614425   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.614438   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.614811   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.614997   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetState
	I1101 01:06:16.617747   58823 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-330042"
	W1101 01:06:16.617763   58823 addons.go:240] addon default-storageclass should already be in state true
	I1101 01:06:16.617783   58823 host.go:66] Checking if "old-k8s-version-330042" exists ...
	I1101 01:06:16.618021   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.618044   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.633877   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37903
	I1101 01:06:16.634227   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34049
	I1101 01:06:16.634436   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.635052   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.635225   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.635251   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.635588   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.635603   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.635656   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.636032   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.636092   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetState
	I1101 01:06:16.636310   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetState
	I1101 01:06:16.637897   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:06:16.640069   58823 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:06:16.638479   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:06:16.640887   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35501
	I1101 01:06:16.641511   58823 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:06:16.641523   58823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 01:06:16.641540   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:06:16.642477   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.643099   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.643115   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.643826   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.644397   58823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:16.644432   58823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:16.644515   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:06:16.644534   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:06:16.644549   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:06:16.644743   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:06:16.644908   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:06:16.645006   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:06:16.645102   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:06:16.648901   58823 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1101 01:06:16.650287   58823 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 01:06:16.650299   58823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 01:06:16.650316   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:06:16.654323   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:06:16.654694   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:06:16.654720   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:06:16.655020   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:06:16.655268   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:06:16.655450   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:06:16.655600   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:06:16.663888   58823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32991
	I1101 01:06:16.664490   58823 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:16.665023   58823 main.go:141] libmachine: Using API Version  1
	I1101 01:06:16.665049   58823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:16.665533   58823 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:16.665720   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetState
	I1101 01:06:16.667516   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .DriverName
	I1101 01:06:16.667817   58823 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 01:06:16.667837   58823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 01:06:16.667856   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHHostname
	I1101 01:06:16.670789   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:06:16.671306   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:40:80", ip: ""} in network mk-old-k8s-version-330042: {Iface:virbr1 ExpiryTime:2023-11-01 02:00:27 +0000 UTC Type:0 Mac:52:54:00:a2:40:80 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-330042 Clientid:01:52:54:00:a2:40:80}
	I1101 01:06:16.671332   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | domain old-k8s-version-330042 has defined IP address 192.168.39.90 and MAC address 52:54:00:a2:40:80 in network mk-old-k8s-version-330042
	I1101 01:06:16.671519   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHPort
	I1101 01:06:16.671688   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHKeyPath
	I1101 01:06:16.671811   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .GetSSHUsername
	I1101 01:06:16.671974   58823 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/old-k8s-version-330042/id_rsa Username:docker}
	I1101 01:06:16.738145   58823 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-330042" context rescaled to 1 replicas
	I1101 01:06:16.738193   58823 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 01:06:16.740269   58823 out.go:177] * Verifying Kubernetes components...
	I1101 01:06:16.741889   58823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:06:16.827316   58823 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 01:06:16.827347   58823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1101 01:06:16.846888   58823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:06:16.868760   58823 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-330042" to be "Ready" ...
	I1101 01:06:16.868848   58823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 01:06:16.885920   58823 node_ready.go:49] node "old-k8s-version-330042" has status "Ready":"True"
	I1101 01:06:16.885962   58823 node_ready.go:38] duration metric: took 17.171382ms waiting for node "old-k8s-version-330042" to be "Ready" ...
	I1101 01:06:16.885975   58823 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:06:16.907070   58823 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-v2xlz" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:16.929166   58823 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 01:06:16.929190   58823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 01:06:16.946209   58823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 01:06:17.010599   58823 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:06:17.010628   58823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 01:06:17.132054   58823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:06:17.868039   58823 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1101 01:06:17.868039   58823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.021104248s)
	I1101 01:06:17.868120   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:17.868126   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:17.868140   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:17.868142   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:17.870315   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Closing plugin on server side
	I1101 01:06:17.870338   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Closing plugin on server side
	I1101 01:06:17.870352   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:17.870364   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:17.870378   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:17.870400   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:17.870429   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:17.870439   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:17.870448   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:17.870470   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:17.870865   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Closing plugin on server side
	I1101 01:06:17.870866   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Closing plugin on server side
	I1101 01:06:17.870876   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:17.870890   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:17.870899   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:17.870915   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:17.920542   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:17.920570   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:17.920923   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Closing plugin on server side
	I1101 01:06:17.920969   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:17.920980   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:18.189030   58823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.056928538s)
	I1101 01:06:18.189096   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:18.189109   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:18.189446   58823 main.go:141] libmachine: (old-k8s-version-330042) DBG | Closing plugin on server side
	I1101 01:06:18.189464   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:18.189476   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:18.189486   58823 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:18.189506   58823 main.go:141] libmachine: (old-k8s-version-330042) Calling .Close
	I1101 01:06:18.189735   58823 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:18.189752   58823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:18.189760   58823 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-330042"
	I1101 01:06:18.192103   58823 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1101 01:06:16.156689   59148 pod_ready.go:102] pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:18.158318   59148 pod_ready.go:102] pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:18.194035   58823 addons.go:502] enable addons completed in 1.602699312s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1101 01:06:18.978162   58823 pod_ready.go:102] pod "coredns-5644d7b6d9-v2xlz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:21.456448   58823 pod_ready.go:102] pod "coredns-5644d7b6d9-v2xlz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:20.657398   59148 pod_ready.go:102] pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:22.156680   59148 pod_ready.go:97] pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.72.97 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-11-01 01:06:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-11-01 01:06:11 +0000 UTC,FinishedAt:2023-11-01 01:06:21 +0000 UTC,ContainerID:cri-o://1ecc4b16207e32548d5d59a4bb7a01519d7e5eaf75b05171abd6c8c635656811,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://1ecc4b16207e32548d5d59a4bb7a01519d7e5eaf75b05171abd6c8c635656811 Started:0xc002af16c0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1101 01:06:22.156709   59148 pod_ready.go:81] duration metric: took 13.113156669s waiting for pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace to be "Ready" ...
	E1101 01:06:22.156718   59148 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-q7r54" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 01:06:08 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.72.97 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-11-01 01:06:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-11-01 01:06:11 +0000 UTC,FinishedAt:2023-11-01 01:06:21 +0000 UTC,ContainerID:cri-o://1ecc4b16207e32548d5d59a4bb7a01519d7e5eaf75b05171abd6c8c635656811,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://1ecc4b16207e32548d5d59a4bb7a01519d7e5eaf75b05171abd6c8c635656811 Started:0xc002af16c0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1101 01:06:22.156726   59148 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rgzt8" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.163387   59148 pod_ready.go:92] pod "coredns-5dd5756b68-rgzt8" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:22.163410   59148 pod_ready.go:81] duration metric: took 6.677078ms waiting for pod "coredns-5dd5756b68-rgzt8" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.163423   59148 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.168499   59148 pod_ready.go:92] pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:22.168519   59148 pod_ready.go:81] duration metric: took 5.088683ms waiting for pod "etcd-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.168528   59148 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.174117   59148 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:22.174143   59148 pod_ready.go:81] duration metric: took 5.607251ms waiting for pod "kube-apiserver-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.174157   59148 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.179321   59148 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:22.179344   59148 pod_ready.go:81] duration metric: took 5.178241ms waiting for pod "kube-controller-manager-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.179356   59148 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kzgzn" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.554016   59148 pod_ready.go:92] pod "kube-proxy-kzgzn" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:22.554047   59148 pod_ready.go:81] duration metric: took 374.683914ms waiting for pod "kube-proxy-kzgzn" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.554061   59148 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.954192   59148 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:22.954216   59148 pod_ready.go:81] duration metric: took 400.146517ms waiting for pod "kube-scheduler-default-k8s-diff-port-639310" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:22.954226   59148 pod_ready.go:38] duration metric: took 13.943077925s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:06:22.954243   59148 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:06:22.954294   59148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:06:22.970594   59148 api_server.go:72] duration metric: took 14.351804953s to wait for apiserver process to appear ...
	I1101 01:06:22.970621   59148 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:06:22.970638   59148 api_server.go:253] Checking apiserver healthz at https://192.168.72.97:8444/healthz ...
	I1101 01:06:22.976061   59148 api_server.go:279] https://192.168.72.97:8444/healthz returned 200:
	ok
	I1101 01:06:22.977368   59148 api_server.go:141] control plane version: v1.28.3
	I1101 01:06:22.977390   59148 api_server.go:131] duration metric: took 6.761145ms to wait for apiserver health ...
	I1101 01:06:22.977398   59148 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:06:23.156987   59148 system_pods.go:59] 8 kube-system pods found
	I1101 01:06:23.157014   59148 system_pods.go:61] "coredns-5dd5756b68-rgzt8" [6d136c6a-e0b2-44c3-a17b-85649d6ff7b7] Running
	I1101 01:06:23.157018   59148 system_pods.go:61] "etcd-default-k8s-diff-port-639310" [9cc2eba7-c72f-4a6f-9c55-8cce5586b574] Running
	I1101 01:06:23.157024   59148 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-639310" [e2b16d1e-af9f-452e-8243-5267f781ab19] Running
	I1101 01:06:23.157028   59148 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-639310" [9173e21f-a13f-4234-94a1-1976881ee23d] Running
	I1101 01:06:23.157034   59148 system_pods.go:61] "kube-proxy-kzgzn" [32d59980-f28a-482c-9aa8-8502915417f0] Running
	I1101 01:06:23.157038   59148 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-639310" [449df462-911a-4afa-8ca5-f9fccce9ecac] Running
	I1101 01:06:23.157046   59148 system_pods.go:61] "metrics-server-57f55c9bc5-65ph4" [4683706e-65f6-4845-a5ad-60da8cd20d8e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:23.157053   59148 system_pods.go:61] "storage-provisioner" [eaba9583-e564-4804-9cd3-2b4de36c85da] Running
	I1101 01:06:23.157060   59148 system_pods.go:74] duration metric: took 179.656649ms to wait for pod list to return data ...
	I1101 01:06:23.157067   59148 default_sa.go:34] waiting for default service account to be created ...
	I1101 01:06:23.352990   59148 default_sa.go:45] found service account: "default"
	I1101 01:06:23.353024   59148 default_sa.go:55] duration metric: took 195.950242ms for default service account to be created ...
	I1101 01:06:23.353034   59148 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 01:06:23.557472   59148 system_pods.go:86] 8 kube-system pods found
	I1101 01:06:23.557498   59148 system_pods.go:89] "coredns-5dd5756b68-rgzt8" [6d136c6a-e0b2-44c3-a17b-85649d6ff7b7] Running
	I1101 01:06:23.557505   59148 system_pods.go:89] "etcd-default-k8s-diff-port-639310" [9cc2eba7-c72f-4a6f-9c55-8cce5586b574] Running
	I1101 01:06:23.557512   59148 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-639310" [e2b16d1e-af9f-452e-8243-5267f781ab19] Running
	I1101 01:06:23.557518   59148 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-639310" [9173e21f-a13f-4234-94a1-1976881ee23d] Running
	I1101 01:06:23.557524   59148 system_pods.go:89] "kube-proxy-kzgzn" [32d59980-f28a-482c-9aa8-8502915417f0] Running
	I1101 01:06:23.557531   59148 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-639310" [449df462-911a-4afa-8ca5-f9fccce9ecac] Running
	I1101 01:06:23.557541   59148 system_pods.go:89] "metrics-server-57f55c9bc5-65ph4" [4683706e-65f6-4845-a5ad-60da8cd20d8e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:23.557554   59148 system_pods.go:89] "storage-provisioner" [eaba9583-e564-4804-9cd3-2b4de36c85da] Running
	I1101 01:06:23.557561   59148 system_pods.go:126] duration metric: took 204.522772ms to wait for k8s-apps to be running ...
	I1101 01:06:23.557571   59148 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 01:06:23.557614   59148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:06:23.572950   59148 system_svc.go:56] duration metric: took 15.367105ms WaitForService to wait for kubelet.
	I1101 01:06:23.572979   59148 kubeadm.go:581] duration metric: took 14.954198383s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1101 01:06:23.572995   59148 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:06:23.754816   59148 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:06:23.754852   59148 node_conditions.go:123] node cpu capacity is 2
	I1101 01:06:23.754865   59148 node_conditions.go:105] duration metric: took 181.864765ms to run NodePressure ...
	I1101 01:06:23.754879   59148 start.go:228] waiting for startup goroutines ...
	I1101 01:06:23.754887   59148 start.go:233] waiting for cluster config update ...
	I1101 01:06:23.754902   59148 start.go:242] writing updated cluster config ...
	I1101 01:06:23.755221   59148 ssh_runner.go:195] Run: rm -f paused
	I1101 01:06:23.805298   59148 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1101 01:06:23.807226   59148 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-639310" cluster and "default" namespace by default
	I1101 01:06:24.353352   58676 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.430634921s)
	I1101 01:06:24.353418   58676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:06:24.367115   58676 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:06:24.376272   58676 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:06:24.385067   58676 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:06:24.385105   58676 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1101 01:06:24.436586   58676 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1101 01:06:24.436698   58676 kubeadm.go:322] [preflight] Running pre-flight checks
	I1101 01:06:24.592267   58676 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 01:06:24.592409   58676 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 01:06:24.592529   58676 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 01:06:24.834834   58676 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 01:06:24.836680   58676 out.go:204]   - Generating certificates and keys ...
	I1101 01:06:24.836825   58676 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1101 01:06:24.836918   58676 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1101 01:06:24.837052   58676 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 01:06:24.837150   58676 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1101 01:06:24.837378   58676 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1101 01:06:24.838501   58676 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1101 01:06:24.838970   58676 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1101 01:06:24.839488   58676 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1101 01:06:24.840058   58676 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 01:06:24.840454   58676 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 01:06:24.840925   58676 kubeadm.go:322] [certs] Using the existing "sa" key
	I1101 01:06:24.841017   58676 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 01:06:25.117460   58676 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 01:06:25.218894   58676 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 01:06:25.319416   58676 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 01:06:25.555023   58676 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 01:06:25.555490   58676 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 01:06:25.558041   58676 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 01:06:25.559946   58676 out.go:204]   - Booting up control plane ...
	I1101 01:06:25.560090   58676 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 01:06:25.560212   58676 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 01:06:25.560321   58676 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 01:06:25.577307   58676 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 01:06:25.580427   58676 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 01:06:25.580508   58676 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1101 01:06:25.710362   58676 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 01:06:23.963710   58823 pod_ready.go:102] pod "coredns-5644d7b6d9-v2xlz" in "kube-system" namespace has status "Ready":"False"
	I1101 01:06:26.455851   58823 pod_ready.go:92] pod "coredns-5644d7b6d9-v2xlz" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:26.455880   58823 pod_ready.go:81] duration metric: took 9.548782268s waiting for pod "coredns-5644d7b6d9-v2xlz" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:26.455889   58823 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hkl2m" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:26.461243   58823 pod_ready.go:92] pod "kube-proxy-hkl2m" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:26.461277   58823 pod_ready.go:81] duration metric: took 5.380815ms waiting for pod "kube-proxy-hkl2m" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:26.461289   58823 pod_ready.go:38] duration metric: took 9.575303239s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:06:26.461314   58823 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:06:26.461372   58823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:06:26.476212   58823 api_server.go:72] duration metric: took 9.737981323s to wait for apiserver process to appear ...
	I1101 01:06:26.476245   58823 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:06:26.476268   58823 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I1101 01:06:26.483060   58823 api_server.go:279] https://192.168.39.90:8443/healthz returned 200:
	ok
	I1101 01:06:26.484299   58823 api_server.go:141] control plane version: v1.16.0
	I1101 01:06:26.484328   58823 api_server.go:131] duration metric: took 8.074303ms to wait for apiserver health ...
	I1101 01:06:26.484342   58823 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:06:26.488710   58823 system_pods.go:59] 4 kube-system pods found
	I1101 01:06:26.488745   58823 system_pods.go:61] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:26.488753   58823 system_pods.go:61] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:26.488766   58823 system_pods.go:61] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:26.488775   58823 system_pods.go:61] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:26.488787   58823 system_pods.go:74] duration metric: took 4.438458ms to wait for pod list to return data ...
	I1101 01:06:26.488797   58823 default_sa.go:34] waiting for default service account to be created ...
	I1101 01:06:26.492513   58823 default_sa.go:45] found service account: "default"
	I1101 01:06:26.492543   58823 default_sa.go:55] duration metric: took 3.739583ms for default service account to be created ...
	I1101 01:06:26.492553   58823 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 01:06:26.496897   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:26.496924   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:26.496929   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:26.496936   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:26.496942   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:26.496956   58823 retry.go:31] will retry after 215.348005ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:26.718021   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:26.718055   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:26.718064   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:26.718080   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:26.718086   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:26.718103   58823 retry.go:31] will retry after 357.067185ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:27.080480   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:27.080519   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:27.080528   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:27.080539   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:27.080548   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:27.080572   58823 retry.go:31] will retry after 441.083478ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:27.528922   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:27.528955   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:27.528964   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:27.528975   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:27.528984   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:27.529008   58823 retry.go:31] will retry after 595.152055ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:28.129735   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:28.129760   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:28.129765   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:28.129772   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:28.129778   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:28.129794   58823 retry.go:31] will retry after 591.454083ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:28.726058   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:28.726089   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:28.726097   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:28.726108   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:28.726118   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:28.726142   58823 retry.go:31] will retry after 682.338416ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:29.414282   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:29.414311   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:29.414321   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:29.414330   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:29.414338   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:29.414356   58823 retry.go:31] will retry after 953.248535ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:30.373950   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:30.373989   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:30.373998   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:30.374017   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:30.374028   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:30.374048   58823 retry.go:31] will retry after 1.291166145s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:31.671462   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:31.671516   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:31.671526   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:31.671537   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:31.671546   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:31.671565   58823 retry.go:31] will retry after 1.413833897s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:33.713596   58676 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002646 seconds
	I1101 01:06:33.713733   58676 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 01:06:33.731994   58676 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 01:06:34.275298   58676 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 01:06:34.275497   58676 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-008483 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 01:06:34.792259   58676 kubeadm.go:322] [bootstrap-token] Using token: ft1765.cra2ecqpjz8r5s0a
	I1101 01:06:34.793944   58676 out.go:204]   - Configuring RBAC rules ...
	I1101 01:06:34.794105   58676 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 01:06:34.800902   58676 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 01:06:34.811310   58676 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 01:06:34.821309   58676 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 01:06:34.826523   58676 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 01:06:34.832305   58676 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 01:06:34.852131   58676 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 01:06:35.137771   58676 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1101 01:06:35.206006   58676 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1101 01:06:35.207223   58676 kubeadm.go:322] 
	I1101 01:06:35.207316   58676 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1101 01:06:35.207327   58676 kubeadm.go:322] 
	I1101 01:06:35.207404   58676 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1101 01:06:35.207413   58676 kubeadm.go:322] 
	I1101 01:06:35.207448   58676 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1101 01:06:35.207528   58676 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 01:06:35.207619   58676 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 01:06:35.207640   58676 kubeadm.go:322] 
	I1101 01:06:35.207703   58676 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1101 01:06:35.207722   58676 kubeadm.go:322] 
	I1101 01:06:35.207796   58676 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 01:06:35.207805   58676 kubeadm.go:322] 
	I1101 01:06:35.207878   58676 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1101 01:06:35.208001   58676 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 01:06:35.208102   58676 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 01:06:35.208111   58676 kubeadm.go:322] 
	I1101 01:06:35.208207   58676 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 01:06:35.208314   58676 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1101 01:06:35.208337   58676 kubeadm.go:322] 
	I1101 01:06:35.208459   58676 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ft1765.cra2ecqpjz8r5s0a \
	I1101 01:06:35.208636   58676 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 \
	I1101 01:06:35.208674   58676 kubeadm.go:322] 	--control-plane 
	I1101 01:06:35.208687   58676 kubeadm.go:322] 
	I1101 01:06:35.208812   58676 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1101 01:06:35.208823   58676 kubeadm.go:322] 
	I1101 01:06:35.208936   58676 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ft1765.cra2ecqpjz8r5s0a \
	I1101 01:06:35.209057   58676 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4fb403e9e71894c9a227672aa4edde53e35b99e0756bc0898b68b1c18b30c724 
	I1101 01:06:35.209758   58676 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 01:06:35.209780   58676 cni.go:84] Creating CNI manager for ""
	I1101 01:06:35.209790   58676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 01:06:35.211735   58676 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 01:06:35.213123   58676 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 01:06:35.235025   58676 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1101 01:06:35.271015   58676 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 01:06:35.271092   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9 minikube.k8s.io/name=no-preload-008483 minikube.k8s.io/updated_at=2023_11_01T01_06_35_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:35.271099   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:35.305061   58676 ops.go:34] apiserver oom_adj: -16
	I1101 01:06:35.663339   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:35.805680   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:33.090990   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:33.091030   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:33.091038   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:33.091049   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:33.091060   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:33.091078   58823 retry.go:31] will retry after 2.252641435s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:35.350673   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:35.350703   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:35.350711   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:35.350722   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:35.350735   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:35.350753   58823 retry.go:31] will retry after 2.131984659s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:36.402770   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:36.902353   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:37.402763   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:37.902598   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:38.401883   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:38.902775   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:39.402062   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:39.902544   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:40.402350   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:40.901853   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:37.489100   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:37.489127   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:37.489132   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:37.489141   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:37.489151   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:37.489169   58823 retry.go:31] will retry after 3.273821759s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:40.767389   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:40.767409   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:40.767414   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:40.767421   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:40.767427   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:40.767441   58823 retry.go:31] will retry after 4.351278698s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:41.402632   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:41.901859   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:42.402379   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:42.902816   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:43.402503   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:43.902158   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:44.402562   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:44.901867   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:45.401852   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:45.902865   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:45.124108   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:45.124138   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:45.124147   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:45.124158   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:45.124166   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:45.124184   58823 retry.go:31] will retry after 4.53047058s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
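The interleaved `retry.go:31` lines above are minikube's system-pods wait loop: it lists the kube-system pods, reports which control-plane components are still missing, and sleeps for a growing interval before checking again (2.1s, 3.3s, 4.4s, 4.5s…). A minimal sketch of that kind of backoff wait is shown below; the function and component names here are hypothetical and this is not the code behind the `retry.go` / `system_pods.go` lines in the log.

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // waitForComponents polls check() with a growing delay until it
    // returns nil or the overall timeout elapses.
    func waitForComponents(timeout time.Duration, check func() error) error {
    	deadline := time.Now().Add(timeout)
    	delay := 2 * time.Second
    	for {
    		err := check()
    		if err == nil {
    			return nil
    		}
    		if time.Now().Add(delay).After(deadline) {
    			return fmt.Errorf("timed out waiting: %w", err)
    		}
    		fmt.Printf("will retry after %s: %v\n", delay, err)
    		time.Sleep(delay)
    		delay += delay / 2 // grow the interval, like the 2.1s, 3.3s, 4.4s steps above
    	}
    }

    func main() {
    	attempts := 0
    	err := waitForComponents(2*time.Minute, func() error {
    		attempts++
    		if attempts < 3 {
    			return errors.New("missing components: etcd, kube-scheduler")
    		}
    		return nil
    	})
    	fmt.Println("result:", err)
    }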
	I1101 01:06:46.402463   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:46.902480   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:47.402022   58676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:06:47.568628   58676 kubeadm.go:1081] duration metric: took 12.297606595s to wait for elevateKubeSystemPrivileges.
	I1101 01:06:47.568672   58676 kubeadm.go:406] StartCluster complete in 5m8.570526689s
	I1101 01:06:47.568696   58676 settings.go:142] acquiring lock: {Name:mk7f269e64dfd8d176737f993e01f6e6badafbc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:06:47.568787   58676 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 01:06:47.570839   58676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7305/kubeconfig: {Name:mk08da65b6c71084e1cfafb19800038e8c8303e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:06:47.571093   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 01:06:47.571207   58676 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1101 01:06:47.571281   58676 addons.go:69] Setting storage-provisioner=true in profile "no-preload-008483"
	I1101 01:06:47.571307   58676 addons.go:69] Setting metrics-server=true in profile "no-preload-008483"
	I1101 01:06:47.571329   58676 addons.go:231] Setting addon metrics-server=true in "no-preload-008483"
	I1101 01:06:47.571345   58676 config.go:182] Loaded profile config "no-preload-008483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:06:47.571360   58676 addons.go:69] Setting default-storageclass=true in profile "no-preload-008483"
	I1101 01:06:47.571369   58676 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-008483"
	W1101 01:06:47.571348   58676 addons.go:240] addon metrics-server should already be in state true
	I1101 01:06:47.571441   58676 host.go:66] Checking if "no-preload-008483" exists ...
	I1101 01:06:47.571312   58676 addons.go:231] Setting addon storage-provisioner=true in "no-preload-008483"
	W1101 01:06:47.571490   58676 addons.go:240] addon storage-provisioner should already be in state true
	I1101 01:06:47.571527   58676 host.go:66] Checking if "no-preload-008483" exists ...
	I1101 01:06:47.571816   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.571815   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.571873   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.571892   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.571873   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.572006   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.590259   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39063
	I1101 01:06:47.590724   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.591055   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39635
	I1101 01:06:47.591202   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.591220   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.591229   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46549
	I1101 01:06:47.591621   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.591707   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.591743   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.592428   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.592471   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.592794   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.592808   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.592822   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.592826   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.593236   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.593283   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.593437   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetState
	I1101 01:06:47.593927   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.593966   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.598345   58676 addons.go:231] Setting addon default-storageclass=true in "no-preload-008483"
	W1101 01:06:47.598381   58676 addons.go:240] addon default-storageclass should already be in state true
	I1101 01:06:47.598413   58676 host.go:66] Checking if "no-preload-008483" exists ...
	I1101 01:06:47.598819   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.598871   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.613965   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43751
	I1101 01:06:47.614004   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40855
	I1101 01:06:47.614542   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.614669   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.615105   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.615121   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.615151   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.615189   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.615476   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.615537   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.615690   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetState
	I1101 01:06:47.615767   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetState
	I1101 01:06:47.617847   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:06:47.620144   58676 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:06:47.618264   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45253
	I1101 01:06:47.618444   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:06:47.621319   58676 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-008483" context rescaled to 1 replicas
	I1101 01:06:47.621520   58676 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.140 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 01:06:47.623048   58676 out.go:177] * Verifying Kubernetes components...
	I1101 01:06:47.621641   58676 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:06:47.621894   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.625008   58676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 01:06:47.625024   58676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:06:47.626461   58676 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1101 01:06:47.628411   58676 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 01:06:47.628425   58676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 01:06:47.628439   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:06:47.626617   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:06:47.627063   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.628510   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.628907   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.629438   58676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17486-7305/.minikube/bin/docker-machine-driver-kvm2
	I1101 01:06:47.629480   58676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 01:06:47.631968   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:06:47.632175   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:06:47.632212   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:06:47.632315   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:06:47.632508   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:06:47.632679   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:06:47.632739   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:06:47.632795   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:06:47.633383   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:06:47.633403   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:06:47.633427   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:06:47.633584   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:06:47.633708   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:06:47.633891   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:06:47.650937   58676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41791
	I1101 01:06:47.651372   58676 main.go:141] libmachine: () Calling .GetVersion
	I1101 01:06:47.651921   58676 main.go:141] libmachine: Using API Version  1
	I1101 01:06:47.651956   58676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 01:06:47.652322   58676 main.go:141] libmachine: () Calling .GetMachineName
	I1101 01:06:47.652536   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetState
	I1101 01:06:47.654393   58676 main.go:141] libmachine: (no-preload-008483) Calling .DriverName
	I1101 01:06:47.654706   58676 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 01:06:47.654721   58676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 01:06:47.654743   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHHostname
	I1101 01:06:47.657743   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:06:47.658176   58676 main.go:141] libmachine: (no-preload-008483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:aa:b5", ip: ""} in network mk-no-preload-008483: {Iface:virbr2 ExpiryTime:2023-11-01 02:01:09 +0000 UTC Type:0 Mac:52:54:00:6c:aa:b5 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:no-preload-008483 Clientid:01:52:54:00:6c:aa:b5}
	I1101 01:06:47.658204   58676 main.go:141] libmachine: (no-preload-008483) DBG | domain no-preload-008483 has defined IP address 192.168.50.140 and MAC address 52:54:00:6c:aa:b5 in network mk-no-preload-008483
	I1101 01:06:47.658448   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHPort
	I1101 01:06:47.658673   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHKeyPath
	I1101 01:06:47.658836   58676 main.go:141] libmachine: (no-preload-008483) Calling .GetSSHUsername
	I1101 01:06:47.659008   58676 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/no-preload-008483/id_rsa Username:docker}
	I1101 01:06:47.808648   58676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:06:47.837158   58676 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 01:06:47.837181   58676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1101 01:06:47.846004   58676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 01:06:47.882427   58676 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 01:06:47.882454   58676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 01:06:47.899419   58676 node_ready.go:35] waiting up to 6m0s for node "no-preload-008483" to be "Ready" ...
	I1101 01:06:47.899496   58676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 01:06:47.919788   58676 node_ready.go:49] node "no-preload-008483" has status "Ready":"True"
	I1101 01:06:47.919821   58676 node_ready.go:38] duration metric: took 20.370648ms waiting for node "no-preload-008483" to be "Ready" ...
	I1101 01:06:47.919836   58676 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:06:47.926205   58676 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:06:47.926232   58676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 01:06:47.930715   58676 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5tp9h" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:47.982413   58676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 01:06:49.813480   58676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.004790768s)
	I1101 01:06:49.813519   58676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.967476056s)
	I1101 01:06:49.813564   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:49.813588   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:49.813528   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:49.813617   58676 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.914052615s)
	I1101 01:06:49.813634   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:49.813643   58676 start.go:926] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1101 01:06:49.813924   58676 main.go:141] libmachine: (no-preload-008483) DBG | Closing plugin on server side
	I1101 01:06:49.813935   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:49.813956   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:49.813970   58676 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:49.813979   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:49.813980   58676 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:49.813990   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:49.813991   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:49.814014   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:49.814239   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:49.814258   58676 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:49.814321   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:49.814339   58676 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:49.857721   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:49.857749   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:49.858034   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:49.858053   58676 main.go:141] libmachine: Making call to close connection to plugin binary
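The sed pipeline that completed at 01:06:49 (confirmed by the "host record injected into CoreDNS's ConfigMap" line) patches the CoreDNS Corefile stored in the kube-system ConfigMap. Reconstructed directly from the sed expressions in the logged command, it inserts this hosts block ahead of the existing `forward . /etc/resolv.conf` line, so in-cluster lookups of host.minikube.internal resolve to the host's gateway address:

        hosts {
           192.168.50.1 host.minikube.internal
           fallthrough
        }

The second expression additionally inserts a `log` directive before the existing `errors` line.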
	I1101 01:06:50.026844   58676 pod_ready.go:97] error getting pod "coredns-5dd5756b68-5tp9h" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-5tp9h" not found
	I1101 01:06:50.026876   58676 pod_ready.go:81] duration metric: took 2.096134316s waiting for pod "coredns-5dd5756b68-5tp9h" in "kube-system" namespace to be "Ready" ...
	E1101 01:06:50.026888   58676 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-5tp9h" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-5tp9h" not found
	I1101 01:06:50.026898   58676 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-m8v7v" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:50.204452   58676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.22199218s)
	I1101 01:06:50.204543   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:50.204561   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:50.204896   58676 main.go:141] libmachine: (no-preload-008483) DBG | Closing plugin on server side
	I1101 01:06:50.204985   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:50.205017   58676 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:50.205046   58676 main.go:141] libmachine: Making call to close driver server
	I1101 01:06:50.205064   58676 main.go:141] libmachine: (no-preload-008483) Calling .Close
	I1101 01:06:50.205339   58676 main.go:141] libmachine: Successfully made call to close driver server
	I1101 01:06:50.205360   58676 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 01:06:50.205371   58676 addons.go:467] Verifying addon metrics-server=true in "no-preload-008483"
	I1101 01:06:50.205393   58676 main.go:141] libmachine: (no-preload-008483) DBG | Closing plugin on server side
	I1101 01:06:50.207552   58676 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1101 01:06:50.208879   58676 addons.go:502] enable addons completed in 2.637673191s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1101 01:06:49.663546   58823 system_pods.go:86] 4 kube-system pods found
	I1101 01:06:49.663578   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:49.663585   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:49.663595   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:49.663604   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:49.663623   58823 retry.go:31] will retry after 5.557220121s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:06:52.106184   58676 pod_ready.go:92] pod "coredns-5dd5756b68-m8v7v" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:52.106208   58676 pod_ready.go:81] duration metric: took 2.079304042s waiting for pod "coredns-5dd5756b68-m8v7v" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.106218   58676 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.112508   58676 pod_ready.go:92] pod "etcd-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:52.112531   58676 pod_ready.go:81] duration metric: took 6.307404ms waiting for pod "etcd-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.112540   58676 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.119263   58676 pod_ready.go:92] pod "kube-apiserver-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:52.119296   58676 pod_ready.go:81] duration metric: took 6.748553ms waiting for pod "kube-apiserver-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.119311   58676 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.125594   58676 pod_ready.go:92] pod "kube-controller-manager-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:52.125619   58676 pod_ready.go:81] duration metric: took 6.30051ms waiting for pod "kube-controller-manager-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.125629   58676 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4cx5t" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.503777   58676 pod_ready.go:92] pod "kube-proxy-4cx5t" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:52.503802   58676 pod_ready.go:81] duration metric: took 378.166648ms waiting for pod "kube-proxy-4cx5t" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.503811   58676 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.904254   58676 pod_ready.go:92] pod "kube-scheduler-no-preload-008483" in "kube-system" namespace has status "Ready":"True"
	I1101 01:06:52.904275   58676 pod_ready.go:81] duration metric: took 400.457426ms waiting for pod "kube-scheduler-no-preload-008483" in "kube-system" namespace to be "Ready" ...
	I1101 01:06:52.904284   58676 pod_ready.go:38] duration metric: took 4.984437509s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:06:52.904303   58676 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:06:52.904352   58676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:06:52.917549   58676 api_server.go:72] duration metric: took 5.295984843s to wait for apiserver process to appear ...
	I1101 01:06:52.917576   58676 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:06:52.917595   58676 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I1101 01:06:52.926515   58676 api_server.go:279] https://192.168.50.140:8443/healthz returned 200:
	ok
	I1101 01:06:52.927673   58676 api_server.go:141] control plane version: v1.28.3
	I1101 01:06:52.927692   58676 api_server.go:131] duration metric: took 10.109726ms to wait for apiserver health ...
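The health check at 01:06:52 issues an HTTPS GET against the apiserver's /healthz endpoint and treats an HTTP 200 with body "ok" as healthy before moving on to the pod checks. A minimal sketch of such a probe is below; it is an assumption-laden simplification that skips the CA and client-certificate setup a real kubeconfig-based check would use (hence the insecure TLS config, marked in the comments).

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // probeHealthz returns nil when GET <base>/healthz answers 200 "ok".
    func probeHealthz(base string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Sketch only: the kubeconfig's CA and client certs are omitted,
    		// so certificate verification is disabled here.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(base + "/healthz")
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
    	}
    	fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    	return nil
    }

    func main() {
    	if err := probeHealthz("https://192.168.50.140:8443"); err != nil {
    		fmt.Println("apiserver not healthy yet:", err)
    	}
    }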
	I1101 01:06:52.927700   58676 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:06:53.109620   58676 system_pods.go:59] 8 kube-system pods found
	I1101 01:06:53.109648   58676 system_pods.go:61] "coredns-5dd5756b68-m8v7v" [351a9458-075b-40d1-96d1-86a450a99251] Running
	I1101 01:06:53.109653   58676 system_pods.go:61] "etcd-no-preload-008483" [e1db4a59-f5e6-4114-a942-1faf4ff84af2] Running
	I1101 01:06:53.109657   58676 system_pods.go:61] "kube-apiserver-no-preload-008483" [f8f8bb39-3093-44bb-8255-5a7d78437a75] Running
	I1101 01:06:53.109661   58676 system_pods.go:61] "kube-controller-manager-no-preload-008483" [a45df9e4-3399-4c21-981f-3c3caaed52a8] Running
	I1101 01:06:53.109665   58676 system_pods.go:61] "kube-proxy-4cx5t" [57c1e87a-aa14-440d-9001-a6ba2ab7c8c6] Running
	I1101 01:06:53.109670   58676 system_pods.go:61] "kube-scheduler-no-preload-008483" [329b7a2d-6146-4e08-910e-ed4d40f57dcb] Running
	I1101 01:06:53.109676   58676 system_pods.go:61] "metrics-server-57f55c9bc5-qcxt7" [bf444b92-dd54-43fc-a9a8-0e9000b562e3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:53.109684   58676 system_pods.go:61] "storage-provisioner" [909163da-9021-4cee-9a72-1bc9b6ae9390] Running
	I1101 01:06:53.109693   58676 system_pods.go:74] duration metric: took 181.986766ms to wait for pod list to return data ...
	I1101 01:06:53.109706   58676 default_sa.go:34] waiting for default service account to be created ...
	I1101 01:06:53.305872   58676 default_sa.go:45] found service account: "default"
	I1101 01:06:53.305904   58676 default_sa.go:55] duration metric: took 196.187269ms for default service account to be created ...
	I1101 01:06:53.305919   58676 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 01:06:53.506566   58676 system_pods.go:86] 8 kube-system pods found
	I1101 01:06:53.506601   58676 system_pods.go:89] "coredns-5dd5756b68-m8v7v" [351a9458-075b-40d1-96d1-86a450a99251] Running
	I1101 01:06:53.506610   58676 system_pods.go:89] "etcd-no-preload-008483" [e1db4a59-f5e6-4114-a942-1faf4ff84af2] Running
	I1101 01:06:53.506618   58676 system_pods.go:89] "kube-apiserver-no-preload-008483" [f8f8bb39-3093-44bb-8255-5a7d78437a75] Running
	I1101 01:06:53.506625   58676 system_pods.go:89] "kube-controller-manager-no-preload-008483" [a45df9e4-3399-4c21-981f-3c3caaed52a8] Running
	I1101 01:06:53.506631   58676 system_pods.go:89] "kube-proxy-4cx5t" [57c1e87a-aa14-440d-9001-a6ba2ab7c8c6] Running
	I1101 01:06:53.506640   58676 system_pods.go:89] "kube-scheduler-no-preload-008483" [329b7a2d-6146-4e08-910e-ed4d40f57dcb] Running
	I1101 01:06:53.506651   58676 system_pods.go:89] "metrics-server-57f55c9bc5-qcxt7" [bf444b92-dd54-43fc-a9a8-0e9000b562e3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:53.506664   58676 system_pods.go:89] "storage-provisioner" [909163da-9021-4cee-9a72-1bc9b6ae9390] Running
	I1101 01:06:53.506675   58676 system_pods.go:126] duration metric: took 200.749464ms to wait for k8s-apps to be running ...
	I1101 01:06:53.506692   58676 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 01:06:53.506747   58676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:06:53.519471   58676 system_svc.go:56] duration metric: took 12.766173ms WaitForService to wait for kubelet.
	I1101 01:06:53.519502   58676 kubeadm.go:581] duration metric: took 5.897944072s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1101 01:06:53.519525   58676 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:06:53.705460   58676 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:06:53.705490   58676 node_conditions.go:123] node cpu capacity is 2
	I1101 01:06:53.705501   58676 node_conditions.go:105] duration metric: took 185.970851ms to run NodePressure ...
	I1101 01:06:53.705515   58676 start.go:228] waiting for startup goroutines ...
	I1101 01:06:53.705523   58676 start.go:233] waiting for cluster config update ...
	I1101 01:06:53.705537   58676 start.go:242] writing updated cluster config ...
	I1101 01:06:53.705824   58676 ssh_runner.go:195] Run: rm -f paused
	I1101 01:06:53.758508   58676 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1101 01:06:53.761998   58676 out.go:177] * Done! kubectl is now configured to use "no-preload-008483" cluster and "default" namespace by default
	I1101 01:06:55.226416   58823 system_pods.go:86] 5 kube-system pods found
	I1101 01:06:55.226443   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:06:55.226449   58823 system_pods.go:89] "kube-apiserver-old-k8s-version-330042" [1d813832-7c56-439f-aee9-c5c326e6cd3d] Pending
	I1101 01:06:55.226453   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:06:55.226460   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:06:55.226466   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:06:55.226480   58823 retry.go:31] will retry after 6.901184226s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1101 01:07:02.133379   58823 system_pods.go:86] 5 kube-system pods found
	I1101 01:07:02.133412   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:07:02.133421   58823 system_pods.go:89] "kube-apiserver-old-k8s-version-330042" [1d813832-7c56-439f-aee9-c5c326e6cd3d] Running
	I1101 01:07:02.133427   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:07:02.133442   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:07:02.133451   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:07:02.133471   58823 retry.go:31] will retry after 10.272464072s: missing components: etcd, kube-controller-manager, kube-scheduler
	I1101 01:07:12.412133   58823 system_pods.go:86] 5 kube-system pods found
	I1101 01:07:12.412166   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:07:12.412175   58823 system_pods.go:89] "kube-apiserver-old-k8s-version-330042" [1d813832-7c56-439f-aee9-c5c326e6cd3d] Running
	I1101 01:07:12.412181   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:07:12.412193   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:07:12.412202   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:07:12.412221   58823 retry.go:31] will retry after 11.290918588s: missing components: etcd, kube-controller-manager, kube-scheduler
	I1101 01:07:23.709462   58823 system_pods.go:86] 8 kube-system pods found
	I1101 01:07:23.709495   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:07:23.709503   58823 system_pods.go:89] "etcd-old-k8s-version-330042" [fc62fe53-9611-4b3d-9dca-a30d58618b2b] Running
	I1101 01:07:23.709510   58823 system_pods.go:89] "kube-apiserver-old-k8s-version-330042" [1d813832-7c56-439f-aee9-c5c326e6cd3d] Running
	I1101 01:07:23.709517   58823 system_pods.go:89] "kube-controller-manager-old-k8s-version-330042" [8ad0ccf9-fa8e-4205-b89c-f5f57cb7be6e] Running
	I1101 01:07:23.709524   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:07:23.709528   58823 system_pods.go:89] "kube-scheduler-old-k8s-version-330042" [2b077f6b-8077-4ccb-93c2-c6d3383b1113] Pending
	I1101 01:07:23.709534   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:07:23.709543   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:07:23.709559   58823 retry.go:31] will retry after 12.900513481s: missing components: kube-scheduler
	I1101 01:07:36.615720   58823 system_pods.go:86] 8 kube-system pods found
	I1101 01:07:36.615746   58823 system_pods.go:89] "coredns-5644d7b6d9-v2xlz" [36626c20-6011-458b-a4a0-3b20dd0a2d7d] Running
	I1101 01:07:36.615751   58823 system_pods.go:89] "etcd-old-k8s-version-330042" [fc62fe53-9611-4b3d-9dca-a30d58618b2b] Running
	I1101 01:07:36.615756   58823 system_pods.go:89] "kube-apiserver-old-k8s-version-330042" [1d813832-7c56-439f-aee9-c5c326e6cd3d] Running
	I1101 01:07:36.615760   58823 system_pods.go:89] "kube-controller-manager-old-k8s-version-330042" [8ad0ccf9-fa8e-4205-b89c-f5f57cb7be6e] Running
	I1101 01:07:36.615763   58823 system_pods.go:89] "kube-proxy-hkl2m" [ea52a4a6-d4d0-4ffe-892b-57869eddeb19] Running
	I1101 01:07:36.615767   58823 system_pods.go:89] "kube-scheduler-old-k8s-version-330042" [2b077f6b-8077-4ccb-93c2-c6d3383b1113] Running
	I1101 01:07:36.615774   58823 system_pods.go:89] "metrics-server-74d5856cc6-m5v28" [df9123d5-270d-4eac-8801-b4ef14c72ce0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 01:07:36.615780   58823 system_pods.go:89] "storage-provisioner" [1dd1f9a9-5780-44ca-b917-4262b661d705] Running
	I1101 01:07:36.615787   58823 system_pods.go:126] duration metric: took 1m10.123228938s to wait for k8s-apps to be running ...
	I1101 01:07:36.615793   58823 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 01:07:36.615837   58823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:07:36.634354   58823 system_svc.go:56] duration metric: took 18.547208ms WaitForService to wait for kubelet.
	I1101 01:07:36.634387   58823 kubeadm.go:581] duration metric: took 1m19.896166299s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1101 01:07:36.634412   58823 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:07:36.638286   58823 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 01:07:36.638315   58823 node_conditions.go:123] node cpu capacity is 2
	I1101 01:07:36.638329   58823 node_conditions.go:105] duration metric: took 3.911826ms to run NodePressure ...
	I1101 01:07:36.638344   58823 start.go:228] waiting for startup goroutines ...
	I1101 01:07:36.638351   58823 start.go:233] waiting for cluster config update ...
	I1101 01:07:36.638365   58823 start.go:242] writing updated cluster config ...
	I1101 01:07:36.638658   58823 ssh_runner.go:195] Run: rm -f paused
	I1101 01:07:36.688409   58823 start.go:600] kubectl: 1.28.3, cluster: 1.16.0 (minor skew: 12)
	I1101 01:07:36.690520   58823 out.go:177] 
	W1101 01:07:36.692006   58823 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.16.0.
	I1101 01:07:36.693512   58823 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1101 01:07:36.694940   58823 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-330042" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-11-01 01:00:27 UTC, ends at Wed 2023-11-01 01:20:24 UTC. --
	Nov 01 01:20:24 old-k8s-version-330042 crio[712]: time="2023-11-01 01:20:24.074606711Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=975820e1-1776-4956-9172-2d66b2ec226b name=/runtime.v1.RuntimeService/Version
	Nov 01 01:20:24 old-k8s-version-330042 crio[712]: time="2023-11-01 01:20:24.075545604Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6d856dbc-d02a-4918-b34f-c79cd22fd894 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:20:24 old-k8s-version-330042 crio[712]: time="2023-11-01 01:20:24.075949764Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698801624075934478,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=6d856dbc-d02a-4918-b34f-c79cd22fd894 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:20:24 old-k8s-version-330042 crio[712]: time="2023-11-01 01:20:24.076485070Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=de28a32b-64cc-43be-89e4-2960336efbb3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:20:24 old-k8s-version-330042 crio[712]: time="2023-11-01 01:20:24.076555049Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=de28a32b-64cc-43be-89e4-2960336efbb3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:20:24 old-k8s-version-330042 crio[712]: time="2023-11-01 01:20:24.076715252Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47e9d096bab623b41a163040180ac989be703b43ed4158dcada9550cc356baa9,PodSandboxId:2bc0679301c92cfefb4fc946b72ac70b853adec0652e63faad70865a6e3e089a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698800779724593813,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dd1f9a9-5780-44ca-b917-4262b661d705,},Annotations:map[string]string{io.kubernetes.container.hash: d3681a08,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84aa2ac725186095a8531ea178ce728ecdc22eb3a5421d8a7793c380fd0b91db,PodSandboxId:fe82f92e3388f19a12451370d3b51420c9825b83e5d3121a1746fda4129e6e4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1698800778947822613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-v2xlz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36626c20-6011-458b-a4a0-3b20dd0a2d7d,},Annotations:map[string]string{io.kubernetes.container.hash: 9de5e7d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cea2a8f7c286807900a09209205034982d97ab11615435f0759431aa7dbb1cf,PodSandboxId:4fbda9ea40dbabd32abb80de20e1cbcb8132cd9236bc271e994c1073123cf8f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1698800778369009032,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hkl2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea52a
4a6-d4d0-4ffe-892b-57869eddeb19,},Annotations:map[string]string{io.kubernetes.container.hash: 3b669d41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d7a337b3484e33b88c27bb98b7190ca96a1228f4be92caf932b3ad008d9c1a1,PodSandboxId:e95d110a3c8bbfb2defb6c7b519f669f7b828ba07a94fa43130175e79f65246c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1698800752089986016,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e364ddc19ecd628024c426c1c99940aa,},Annotations:map[s
tring]string{io.kubernetes.container.hash: aac46c06,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c72bd307dc95d8a490f8ce186c9e0fd7d636bd82e0b07ae130b68caa14fa8ef,PodSandboxId:d1deaf65d94fa2b0967a9422ded210de010414fe098352937de22790ee3ef39e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1698800750901816502,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aab85e7b72354e61671d1808369ec300,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 464e7b7e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6dfc40f684bae2ed88ef5e956c1cc1b727a6db7dd14095504543757767d170f,PodSandboxId:64bcf463ea572198a70b221d1472e002c43c80cb8ca5a7bb3b833fe920a08491,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1698800750828482948,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubern
etes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2112d10fa84b67324dec92087b40581b328307a5cb69e922e1c3a8a63343920c,PodSandboxId:6970e3a8abc6a6a707074731218947867e4bd7285ab87c10ea35079c3640755d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1698800750747524609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=de28a32b-64cc-43be-89e4-2960336efbb3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:20:24 old-k8s-version-330042 crio[712]: time="2023-11-01 01:20:24.115609456Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d23ed958-7ba9-4d5d-a0c9-14d0fbffcea8 name=/runtime.v1.RuntimeService/Version
	Nov 01 01:20:24 old-k8s-version-330042 crio[712]: time="2023-11-01 01:20:24.115687647Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d23ed958-7ba9-4d5d-a0c9-14d0fbffcea8 name=/runtime.v1.RuntimeService/Version
	Nov 01 01:20:24 old-k8s-version-330042 crio[712]: time="2023-11-01 01:20:24.116743432Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=fb4e816c-4e24-492d-aa97-a4ed51295cd4 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:20:24 old-k8s-version-330042 crio[712]: time="2023-11-01 01:20:24.117268433Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698801624117253876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=fb4e816c-4e24-492d-aa97-a4ed51295cd4 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:20:24 old-k8s-version-330042 crio[712]: time="2023-11-01 01:20:24.117722002Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4bf511d5-5a23-4e9c-8055-a9308176c2ca name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:20:24 old-k8s-version-330042 crio[712]: time="2023-11-01 01:20:24.117796598Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4bf511d5-5a23-4e9c-8055-a9308176c2ca name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:20:24 old-k8s-version-330042 crio[712]: time="2023-11-01 01:20:24.117964769Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47e9d096bab623b41a163040180ac989be703b43ed4158dcada9550cc356baa9,PodSandboxId:2bc0679301c92cfefb4fc946b72ac70b853adec0652e63faad70865a6e3e089a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698800779724593813,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dd1f9a9-5780-44ca-b917-4262b661d705,},Annotations:map[string]string{io.kubernetes.container.hash: d3681a08,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84aa2ac725186095a8531ea178ce728ecdc22eb3a5421d8a7793c380fd0b91db,PodSandboxId:fe82f92e3388f19a12451370d3b51420c9825b83e5d3121a1746fda4129e6e4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1698800778947822613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-v2xlz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36626c20-6011-458b-a4a0-3b20dd0a2d7d,},Annotations:map[string]string{io.kubernetes.container.hash: 9de5e7d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cea2a8f7c286807900a09209205034982d97ab11615435f0759431aa7dbb1cf,PodSandboxId:4fbda9ea40dbabd32abb80de20e1cbcb8132cd9236bc271e994c1073123cf8f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1698800778369009032,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hkl2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea52a
4a6-d4d0-4ffe-892b-57869eddeb19,},Annotations:map[string]string{io.kubernetes.container.hash: 3b669d41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d7a337b3484e33b88c27bb98b7190ca96a1228f4be92caf932b3ad008d9c1a1,PodSandboxId:e95d110a3c8bbfb2defb6c7b519f669f7b828ba07a94fa43130175e79f65246c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1698800752089986016,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e364ddc19ecd628024c426c1c99940aa,},Annotations:map[s
tring]string{io.kubernetes.container.hash: aac46c06,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c72bd307dc95d8a490f8ce186c9e0fd7d636bd82e0b07ae130b68caa14fa8ef,PodSandboxId:d1deaf65d94fa2b0967a9422ded210de010414fe098352937de22790ee3ef39e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1698800750901816502,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aab85e7b72354e61671d1808369ec300,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 464e7b7e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6dfc40f684bae2ed88ef5e956c1cc1b727a6db7dd14095504543757767d170f,PodSandboxId:64bcf463ea572198a70b221d1472e002c43c80cb8ca5a7bb3b833fe920a08491,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1698800750828482948,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubern
etes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2112d10fa84b67324dec92087b40581b328307a5cb69e922e1c3a8a63343920c,PodSandboxId:6970e3a8abc6a6a707074731218947867e4bd7285ab87c10ea35079c3640755d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1698800750747524609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4bf511d5-5a23-4e9c-8055-a9308176c2ca name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:20:24 old-k8s-version-330042 crio[712]: time="2023-11-01 01:20:24.152359978Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=06b76d5e-6b90-4838-84e4-ac58c11928ac name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Nov 01 01:20:24 old-k8s-version-330042 crio[712]: time="2023-11-01 01:20:24.152620461Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:7e04121e7702a139845d102c9015efa9c12288cc6f10c0b394b5229c0ed7ee29,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5856cc6-m5v28,Uid:df9123d5-270d-4eac-8801-b4ef14c72ce0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698800779355736013,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5856cc6-m5v28,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df9123d5-270d-4eac-8801-b4ef14c72ce0,k8s-app: metrics-server,pod-template-hash: 74d5856cc6,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-01T01:06:19.009450675Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2bc0679301c92cfefb4fc946b72ac70b853adec0652e63faad70865a6e3e089a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1dd1f9a9-5780-44ca-b917-4262b661d7
05,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698800779132007730,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dd1f9a9-5780-44ca-b917-4262b661d705,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\
"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-11-01T01:06:17.88153306Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fe82f92e3388f19a12451370d3b51420c9825b83e5d3121a1746fda4129e6e4c,Metadata:&PodSandboxMetadata{Name:coredns-5644d7b6d9-v2xlz,Uid:36626c20-6011-458b-a4a0-3b20dd0a2d7d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698800778736355007,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5644d7b6d9-v2xlz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36626c20-6011-458b-a4a0-3b20dd0a2d7d,k8s-app: kube-dns,pod-template-hash: 5644d7b6d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-01T01:06:18.389957639Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4fbda9ea40dbabd32abb80de20e1cbcb8132cd9236bc271e994c1073123cf8f9,Metadata:&PodSandboxMetadata{Name:kube-proxy-hkl2m,Uid:ea52a4a6-d4d0-4ffe-892b
-57869eddeb19,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698800776856377475,Labels:map[string]string{controller-revision-hash: 68594d95c,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-hkl2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea52a4a6-d4d0-4ffe-892b-57869eddeb19,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-01T01:06:16.501178617Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:64bcf463ea572198a70b221d1472e002c43c80cb8ca5a7bb3b833fe920a08491,Metadata:&PodSandboxMetadata{Name:kube-scheduler-old-k8s-version-330042,Uid:b3d303074fe0ca1d42a8bd9ed248df09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698800750138206653,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1
d42a8bd9ed248df09,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b3d303074fe0ca1d42a8bd9ed248df09,kubernetes.io/config.seen: 2023-11-01T01:05:49.721679819Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6970e3a8abc6a6a707074731218947867e4bd7285ab87c10ea35079c3640755d,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-old-k8s-version-330042,Uid:7376ddb4f190a0ded9394063437bcb4e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698800750133193370,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7376ddb4f190a0ded9394063437bcb4e,kubernetes.io/config.seen: 2023-11-01T01:05:49.718683781Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id
:e95d110a3c8bbfb2defb6c7b519f669f7b828ba07a94fa43130175e79f65246c,Metadata:&PodSandboxMetadata{Name:etcd-old-k8s-version-330042,Uid:e364ddc19ecd628024c426c1c99940aa,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698800750111824889,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e364ddc19ecd628024c426c1c99940aa,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e364ddc19ecd628024c426c1c99940aa,kubernetes.io/config.seen: 2023-11-01T01:05:49.72332727Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d1deaf65d94fa2b0967a9422ded210de010414fe098352937de22790ee3ef39e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-old-k8s-version-330042,Uid:aab85e7b72354e61671d1808369ec300,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698800750089812181,Labels:map[string]string{component: kube-apiserver,io.k
ubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aab85e7b72354e61671d1808369ec300,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: aab85e7b72354e61671d1808369ec300,kubernetes.io/config.seen: 2023-11-01T01:05:49.717467772Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=06b76d5e-6b90-4838-84e4-ac58c11928ac name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Nov 01 01:20:24 old-k8s-version-330042 crio[712]: time="2023-11-01 01:20:24.153475668Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=dcf21645-af9f-4d5d-bdd9-e32502584564 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Nov 01 01:20:24 old-k8s-version-330042 crio[712]: time="2023-11-01 01:20:24.153529866Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=dcf21645-af9f-4d5d-bdd9-e32502584564 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Nov 01 01:20:24 old-k8s-version-330042 crio[712]: time="2023-11-01 01:20:24.153688949Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47e9d096bab623b41a163040180ac989be703b43ed4158dcada9550cc356baa9,PodSandboxId:2bc0679301c92cfefb4fc946b72ac70b853adec0652e63faad70865a6e3e089a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698800779724593813,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dd1f9a9-5780-44ca-b917-4262b661d705,},Annotations:map[string]string{io.kubernetes.container.hash: d3681a08,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84aa2ac725186095a8531ea178ce728ecdc22eb3a5421d8a7793c380fd0b91db,PodSandboxId:fe82f92e3388f19a12451370d3b51420c9825b83e5d3121a1746fda4129e6e4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1698800778947822613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-v2xlz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36626c20-6011-458b-a4a0-3b20dd0a2d7d,},Annotations:map[string]string{io.kubernetes.container.hash: 9de5e7d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cea2a8f7c286807900a09209205034982d97ab11615435f0759431aa7dbb1cf,PodSandboxId:4fbda9ea40dbabd32abb80de20e1cbcb8132cd9236bc271e994c1073123cf8f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1698800778369009032,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hkl2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea52a
4a6-d4d0-4ffe-892b-57869eddeb19,},Annotations:map[string]string{io.kubernetes.container.hash: 3b669d41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d7a337b3484e33b88c27bb98b7190ca96a1228f4be92caf932b3ad008d9c1a1,PodSandboxId:e95d110a3c8bbfb2defb6c7b519f669f7b828ba07a94fa43130175e79f65246c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1698800752089986016,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e364ddc19ecd628024c426c1c99940aa,},Annotations:map[s
tring]string{io.kubernetes.container.hash: aac46c06,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c72bd307dc95d8a490f8ce186c9e0fd7d636bd82e0b07ae130b68caa14fa8ef,PodSandboxId:d1deaf65d94fa2b0967a9422ded210de010414fe098352937de22790ee3ef39e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1698800750901816502,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aab85e7b72354e61671d1808369ec300,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 464e7b7e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6dfc40f684bae2ed88ef5e956c1cc1b727a6db7dd14095504543757767d170f,PodSandboxId:64bcf463ea572198a70b221d1472e002c43c80cb8ca5a7bb3b833fe920a08491,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1698800750828482948,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubern
etes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2112d10fa84b67324dec92087b40581b328307a5cb69e922e1c3a8a63343920c,PodSandboxId:6970e3a8abc6a6a707074731218947867e4bd7285ab87c10ea35079c3640755d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1698800750747524609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=dcf21645-af9f-4d5d-bdd9-e32502584564 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Nov 01 01:20:24 old-k8s-version-330042 crio[712]: time="2023-11-01 01:20:24.156466927Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=86c812d0-fcdf-464f-9ed1-f220f231bf49 name=/runtime.v1.RuntimeService/Version
	Nov 01 01:20:24 old-k8s-version-330042 crio[712]: time="2023-11-01 01:20:24.156511642Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=86c812d0-fcdf-464f-9ed1-f220f231bf49 name=/runtime.v1.RuntimeService/Version
	Nov 01 01:20:24 old-k8s-version-330042 crio[712]: time="2023-11-01 01:20:24.158234171Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8f33d178-c606-4bd2-8167-a7f07eead11f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:20:24 old-k8s-version-330042 crio[712]: time="2023-11-01 01:20:24.158732558Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698801624158714530,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=8f33d178-c606-4bd2-8167-a7f07eead11f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 01:20:24 old-k8s-version-330042 crio[712]: time="2023-11-01 01:20:24.159479465Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=964d7e97-061e-4ec8-9996-92dfe0dd07f3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:20:24 old-k8s-version-330042 crio[712]: time="2023-11-01 01:20:24.159552620Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=964d7e97-061e-4ec8-9996-92dfe0dd07f3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 01:20:24 old-k8s-version-330042 crio[712]: time="2023-11-01 01:20:24.159719396Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47e9d096bab623b41a163040180ac989be703b43ed4158dcada9550cc356baa9,PodSandboxId:2bc0679301c92cfefb4fc946b72ac70b853adec0652e63faad70865a6e3e089a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698800779724593813,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dd1f9a9-5780-44ca-b917-4262b661d705,},Annotations:map[string]string{io.kubernetes.container.hash: d3681a08,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84aa2ac725186095a8531ea178ce728ecdc22eb3a5421d8a7793c380fd0b91db,PodSandboxId:fe82f92e3388f19a12451370d3b51420c9825b83e5d3121a1746fda4129e6e4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1698800778947822613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-v2xlz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36626c20-6011-458b-a4a0-3b20dd0a2d7d,},Annotations:map[string]string{io.kubernetes.container.hash: 9de5e7d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cea2a8f7c286807900a09209205034982d97ab11615435f0759431aa7dbb1cf,PodSandboxId:4fbda9ea40dbabd32abb80de20e1cbcb8132cd9236bc271e994c1073123cf8f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1698800778369009032,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hkl2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea52a
4a6-d4d0-4ffe-892b-57869eddeb19,},Annotations:map[string]string{io.kubernetes.container.hash: 3b669d41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d7a337b3484e33b88c27bb98b7190ca96a1228f4be92caf932b3ad008d9c1a1,PodSandboxId:e95d110a3c8bbfb2defb6c7b519f669f7b828ba07a94fa43130175e79f65246c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1698800752089986016,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e364ddc19ecd628024c426c1c99940aa,},Annotations:map[s
tring]string{io.kubernetes.container.hash: aac46c06,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c72bd307dc95d8a490f8ce186c9e0fd7d636bd82e0b07ae130b68caa14fa8ef,PodSandboxId:d1deaf65d94fa2b0967a9422ded210de010414fe098352937de22790ee3ef39e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1698800750901816502,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aab85e7b72354e61671d1808369ec300,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 464e7b7e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6dfc40f684bae2ed88ef5e956c1cc1b727a6db7dd14095504543757767d170f,PodSandboxId:64bcf463ea572198a70b221d1472e002c43c80cb8ca5a7bb3b833fe920a08491,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1698800750828482948,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubern
etes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2112d10fa84b67324dec92087b40581b328307a5cb69e922e1c3a8a63343920c,PodSandboxId:6970e3a8abc6a6a707074731218947867e4bd7285ab87c10ea35079c3640755d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1698800750747524609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-330042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=964d7e97-061e-4ec8-9996-92dfe0dd07f3 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	47e9d096bab62       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   2bc0679301c92       storage-provisioner
	84aa2ac725186       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   14 minutes ago      Running             coredns                   0                   fe82f92e3388f       coredns-5644d7b6d9-v2xlz
	7cea2a8f7c286       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   14 minutes ago      Running             kube-proxy                0                   4fbda9ea40dba       kube-proxy-hkl2m
	0d7a337b3484e       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   14 minutes ago      Running             etcd                      0                   e95d110a3c8bb       etcd-old-k8s-version-330042
	9c72bd307dc95       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   14 minutes ago      Running             kube-apiserver            0                   d1deaf65d94fa       kube-apiserver-old-k8s-version-330042
	a6dfc40f684ba       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   14 minutes ago      Running             kube-scheduler            0                   64bcf463ea572       kube-scheduler-old-k8s-version-330042
	2112d10fa84b6       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   14 minutes ago      Running             kube-controller-manager   0                   6970e3a8abc6a       kube-controller-manager-old-k8s-version-330042
	
	* 
	* ==> coredns [84aa2ac725186095a8531ea178ce728ecdc22eb3a5421d8a7793c380fd0b91db] <==
	* .:53
	2023-11-01T01:06:19.212Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	2023-11-01T01:06:19.212Z [INFO] CoreDNS-1.6.2
	2023-11-01T01:06:19.212Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-11-01T01:06:19.225Z [INFO] 127.0.0.1:55625 - 16103 "HINFO IN 3101082495356081793.6221192527272173986. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011553843s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-330042
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-330042
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9
	                    minikube.k8s.io/name=old-k8s-version-330042
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_01T01_06_01_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Nov 2023 01:05:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Nov 2023 01:20:17 +0000   Wed, 01 Nov 2023 01:05:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Nov 2023 01:20:17 +0000   Wed, 01 Nov 2023 01:05:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Nov 2023 01:20:17 +0000   Wed, 01 Nov 2023 01:05:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Nov 2023 01:20:17 +0000   Wed, 01 Nov 2023 01:05:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.90
	  Hostname:    old-k8s-version-330042
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 78eee0e52c544393a797354bd60373a7
	 System UUID:                78eee0e5-2c54-4393-a797-354bd60373a7
	 Boot ID:                    0eca7327-765c-4eae-b17e-bcbd0aff4118
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-v2xlz                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                etcd-old-k8s-version-330042                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-apiserver-old-k8s-version-330042             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-controller-manager-old-k8s-version-330042    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-proxy-hkl2m                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                kube-scheduler-old-k8s-version-330042             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                metrics-server-74d5856cc6-m5v28                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet, old-k8s-version-330042     Node old-k8s-version-330042 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x7 over 14m)  kubelet, old-k8s-version-330042     Node old-k8s-version-330042 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x8 over 14m)  kubelet, old-k8s-version-330042     Node old-k8s-version-330042 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                kube-proxy, old-k8s-version-330042  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Nov 1 01:00] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.064201] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.477165] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.926122] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.141967] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.564915] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.603308] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.119192] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.153404] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.107213] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.235019] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[ +20.625955] systemd-fstab-generator[1030]: Ignoring "noauto" for root device
	[  +0.510958] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov 1 01:01] kauditd_printk_skb: 13 callbacks suppressed
	[ +31.144458] kauditd_printk_skb: 4 callbacks suppressed
	[Nov 1 01:05] systemd-fstab-generator[3090]: Ignoring "noauto" for root device
	[  +0.812286] kauditd_printk_skb: 6 callbacks suppressed
	[Nov 1 01:06] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [0d7a337b3484e33b88c27bb98b7190ca96a1228f4be92caf932b3ad008d9c1a1] <==
	* 2023-11-01 01:05:52.202329 I | raft: 8d381aaacda0b9bd became follower at term 0
	2023-11-01 01:05:52.202357 I | raft: newRaft 8d381aaacda0b9bd [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-11-01 01:05:52.202378 I | raft: 8d381aaacda0b9bd became follower at term 1
	2023-11-01 01:05:52.221732 W | auth: simple token is not cryptographically signed
	2023-11-01 01:05:52.227798 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-11-01 01:05:52.233543 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-01 01:05:52.233802 I | embed: listening for metrics on http://192.168.39.90:2381
	2023-11-01 01:05:52.234130 I | etcdserver: 8d381aaacda0b9bd as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-11-01 01:05:52.234380 I | etcdserver/membership: added member 8d381aaacda0b9bd [https://192.168.39.90:2380] to cluster 8cf3a1558a63fa9e
	2023-11-01 01:05:52.234434 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-11-01 01:05:52.903003 I | raft: 8d381aaacda0b9bd is starting a new election at term 1
	2023-11-01 01:05:52.903118 I | raft: 8d381aaacda0b9bd became candidate at term 2
	2023-11-01 01:05:52.903132 I | raft: 8d381aaacda0b9bd received MsgVoteResp from 8d381aaacda0b9bd at term 2
	2023-11-01 01:05:52.903142 I | raft: 8d381aaacda0b9bd became leader at term 2
	2023-11-01 01:05:52.903147 I | raft: raft.node: 8d381aaacda0b9bd elected leader 8d381aaacda0b9bd at term 2
	2023-11-01 01:05:52.903600 I | etcdserver: setting up the initial cluster version to 3.3
	2023-11-01 01:05:52.905020 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-11-01 01:05:52.905482 I | etcdserver: published {Name:old-k8s-version-330042 ClientURLs:[https://192.168.39.90:2379]} to cluster 8cf3a1558a63fa9e
	2023-11-01 01:05:52.905622 I | embed: ready to serve client requests
	2023-11-01 01:05:52.906884 I | embed: serving client requests on 192.168.39.90:2379
	2023-11-01 01:05:52.906961 I | embed: ready to serve client requests
	2023-11-01 01:05:52.907161 I | etcdserver/api: enabled capabilities for version 3.3
	2023-11-01 01:05:52.908241 I | embed: serving client requests on 127.0.0.1:2379
	2023-11-01 01:15:52.931227 I | mvcc: store.index: compact 661
	2023-11-01 01:15:52.932914 I | mvcc: finished scheduled compaction at 661 (took 1.288205ms)
	
	* 
	* ==> kernel <==
	*  01:20:24 up 20 min,  0 users,  load average: 0.21, 0.11, 0.10
	Linux old-k8s-version-330042 5.10.57 #1 SMP Tue Oct 31 22:14:31 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [9c72bd307dc95d8a490f8ce186c9e0fd7d636bd82e0b07ae130b68caa14fa8ef] <==
	* I1101 01:11:57.376023       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1101 01:11:57.376409       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 01:11:57.376574       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1101 01:11:57.376617       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1101 01:13:57.377248       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1101 01:13:57.377374       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 01:13:57.377440       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1101 01:13:57.377452       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1101 01:15:57.379578       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1101 01:15:57.379738       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 01:15:57.379813       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1101 01:15:57.379825       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1101 01:16:57.380348       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1101 01:16:57.380590       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 01:16:57.380694       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1101 01:16:57.380742       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1101 01:18:57.381247       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1101 01:18:57.381381       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 01:18:57.381439       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1101 01:18:57.381451       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [2112d10fa84b67324dec92087b40581b328307a5cb69e922e1c3a8a63343920c] <==
	* W1101 01:14:16.893366       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1101 01:14:20.915449       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1101 01:14:48.895379       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1101 01:14:51.167562       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1101 01:15:20.897698       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1101 01:15:21.419595       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1101 01:15:51.671556       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1101 01:15:52.900385       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1101 01:16:21.923610       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1101 01:16:24.902233       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1101 01:16:52.176397       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1101 01:16:56.904654       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1101 01:17:22.428776       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1101 01:17:28.906705       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1101 01:17:52.681208       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1101 01:18:00.909239       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1101 01:18:22.933172       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1101 01:18:32.911589       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1101 01:18:53.185241       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1101 01:19:04.913982       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1101 01:19:23.437547       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1101 01:19:36.916595       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1101 01:19:53.689676       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1101 01:20:08.919164       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1101 01:20:23.941896       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	* 
	* ==> kube-proxy [7cea2a8f7c286807900a09209205034982d97ab11615435f0759431aa7dbb1cf] <==
	* W1101 01:06:18.716601       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1101 01:06:18.760499       1 node.go:135] Successfully retrieved node IP: 192.168.39.90
	I1101 01:06:18.760629       1 server_others.go:149] Using iptables Proxier.
	I1101 01:06:18.761597       1 server.go:529] Version: v1.16.0
	I1101 01:06:18.779304       1 config.go:313] Starting service config controller
	I1101 01:06:18.779369       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1101 01:06:18.779412       1 config.go:131] Starting endpoints config controller
	I1101 01:06:18.779443       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1101 01:06:18.879826       1 shared_informer.go:204] Caches are synced for endpoints config 
	I1101 01:06:18.879936       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [a6dfc40f684bae2ed88ef5e956c1cc1b727a6db7dd14095504543757767d170f] <==
	* I1101 01:05:56.378457       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1101 01:05:56.431332       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1101 01:05:56.436161       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1101 01:05:56.436413       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1101 01:05:56.438293       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1101 01:05:56.438390       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1101 01:05:56.438428       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1101 01:05:56.438460       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1101 01:05:56.439850       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1101 01:05:56.439929       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1101 01:05:56.440104       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1101 01:05:56.440818       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1101 01:05:57.434514       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1101 01:05:57.439382       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1101 01:05:57.441131       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1101 01:05:57.442746       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1101 01:05:57.444399       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1101 01:05:57.447201       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1101 01:05:57.448599       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1101 01:05:57.451916       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1101 01:05:57.453892       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1101 01:05:57.455142       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1101 01:05:57.456440       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1101 01:06:16.593765       1 factory.go:585] pod is already present in the activeQ
	E1101 01:06:16.728533       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-11-01 01:00:27 UTC, ends at Wed 2023-11-01 01:20:24 UTC. --
	Nov 01 01:15:53 old-k8s-version-330042 kubelet[3096]: E1101 01:15:53.523625    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:16:07 old-k8s-version-330042 kubelet[3096]: E1101 01:16:07.523769    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:16:20 old-k8s-version-330042 kubelet[3096]: E1101 01:16:20.523622    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:16:32 old-k8s-version-330042 kubelet[3096]: E1101 01:16:32.523784    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:16:44 old-k8s-version-330042 kubelet[3096]: E1101 01:16:44.523588    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:16:59 old-k8s-version-330042 kubelet[3096]: E1101 01:16:59.538588    3096 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 01 01:16:59 old-k8s-version-330042 kubelet[3096]: E1101 01:16:59.538747    3096 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 01 01:16:59 old-k8s-version-330042 kubelet[3096]: E1101 01:16:59.538821    3096 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 01 01:16:59 old-k8s-version-330042 kubelet[3096]: E1101 01:16:59.538863    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Nov 01 01:17:10 old-k8s-version-330042 kubelet[3096]: E1101 01:17:10.523538    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:17:22 old-k8s-version-330042 kubelet[3096]: E1101 01:17:22.523584    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:17:36 old-k8s-version-330042 kubelet[3096]: E1101 01:17:36.523605    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:17:47 old-k8s-version-330042 kubelet[3096]: E1101 01:17:47.529911    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:17:58 old-k8s-version-330042 kubelet[3096]: E1101 01:17:58.524348    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:18:12 old-k8s-version-330042 kubelet[3096]: E1101 01:18:12.523694    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:18:27 old-k8s-version-330042 kubelet[3096]: E1101 01:18:27.523514    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:18:40 old-k8s-version-330042 kubelet[3096]: E1101 01:18:40.523541    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:18:52 old-k8s-version-330042 kubelet[3096]: E1101 01:18:52.523520    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:19:05 old-k8s-version-330042 kubelet[3096]: E1101 01:19:05.524330    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:19:17 old-k8s-version-330042 kubelet[3096]: E1101 01:19:17.524417    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:19:29 old-k8s-version-330042 kubelet[3096]: E1101 01:19:29.523595    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:19:40 old-k8s-version-330042 kubelet[3096]: E1101 01:19:40.523840    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:19:51 old-k8s-version-330042 kubelet[3096]: E1101 01:19:51.523402    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:20:05 old-k8s-version-330042 kubelet[3096]: E1101 01:20:05.524268    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 01:20:18 old-k8s-version-330042 kubelet[3096]: E1101 01:20:18.523812    3096 pod_workers.go:191] Error syncing pod df9123d5-270d-4eac-8801-b4ef14c72ce0 ("metrics-server-74d5856cc6-m5v28_kube-system(df9123d5-270d-4eac-8801-b4ef14c72ce0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [47e9d096bab623b41a163040180ac989be703b43ed4158dcada9550cc356baa9] <==
	* I1101 01:06:19.831663       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 01:06:19.848248       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 01:06:19.848338       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 01:06:19.856824       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 01:06:19.857621       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a97e94b9-9dff-4bda-b326-60eaa155914e", APIVersion:"v1", ResourceVersion:"422", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-330042_c63c0714-a303-4b29-a0ce-ec388327e4a4 became leader
	I1101 01:06:19.858123       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-330042_c63c0714-a303-4b29-a0ce-ec388327e4a4!
	I1101 01:06:19.959257       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-330042_c63c0714-a303-4b29-a0ce-ec388327e4a4!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-330042 -n old-k8s-version-330042
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-330042 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-m5v28
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-330042 describe pod metrics-server-74d5856cc6-m5v28
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-330042 describe pod metrics-server-74d5856cc6-m5v28: exit status 1 (71.600039ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-m5v28" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-330042 describe pod metrics-server-74d5856cc6-m5v28: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (225.69s)

                                                
                                    

Test pass (228/292)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 22.09
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.28.3/json-events 17.56
11 TestDownloadOnly/v1.28.3/preload-exists 0
15 TestDownloadOnly/v1.28.3/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.15
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
19 TestBinaryMirror 0.59
20 TestOffline 132.02
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
25 TestAddons/Setup 215.11
27 TestAddons/parallel/Registry 17.01
29 TestAddons/parallel/InspektorGadget 11.26
30 TestAddons/parallel/MetricsServer 6.19
31 TestAddons/parallel/HelmTiller 19.29
33 TestAddons/parallel/CSI 53.85
34 TestAddons/parallel/Headlamp 15.36
35 TestAddons/parallel/CloudSpanner 5.73
36 TestAddons/parallel/LocalPath 23.51
37 TestAddons/parallel/NvidiaDevicePlugin 5.86
40 TestAddons/serial/GCPAuth/Namespaces 0.12
42 TestCertOptions 105.03
43 TestCertExpiration 286.53
45 TestForceSystemdFlag 75.38
46 TestForceSystemdEnv 67.18
48 TestKVMDriverInstallOrUpdate 3.23
52 TestErrorSpam/setup 45.34
53 TestErrorSpam/start 0.4
54 TestErrorSpam/status 0.81
55 TestErrorSpam/pause 1.53
56 TestErrorSpam/unpause 1.73
57 TestErrorSpam/stop 2.27
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 61.22
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 34.78
64 TestFunctional/serial/KubeContext 0.04
65 TestFunctional/serial/KubectlGetPods 0.08
68 TestFunctional/serial/CacheCmd/cache/add_remote 3.54
69 TestFunctional/serial/CacheCmd/cache/add_local 2.2
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
71 TestFunctional/serial/CacheCmd/cache/list 0.06
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
73 TestFunctional/serial/CacheCmd/cache/cache_reload 1.74
74 TestFunctional/serial/CacheCmd/cache/delete 0.12
75 TestFunctional/serial/MinikubeKubectlCmd 0.12
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
77 TestFunctional/serial/ExtraConfig 34.45
78 TestFunctional/serial/ComponentHealth 0.07
79 TestFunctional/serial/LogsCmd 1.48
80 TestFunctional/serial/LogsFileCmd 1.46
81 TestFunctional/serial/InvalidService 4.1
83 TestFunctional/parallel/ConfigCmd 0.44
84 TestFunctional/parallel/DashboardCmd 22.06
85 TestFunctional/parallel/DryRun 0.34
86 TestFunctional/parallel/InternationalLanguage 0.17
87 TestFunctional/parallel/StatusCmd 0.88
91 TestFunctional/parallel/ServiceCmdConnect 11.69
92 TestFunctional/parallel/AddonsCmd 0.2
93 TestFunctional/parallel/PersistentVolumeClaim 45.41
95 TestFunctional/parallel/SSHCmd 0.41
96 TestFunctional/parallel/CpCmd 1.1
97 TestFunctional/parallel/MySQL 31.86
98 TestFunctional/parallel/FileSync 0.3
99 TestFunctional/parallel/CertSync 1.53
103 TestFunctional/parallel/NodeLabels 0.07
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.5
107 TestFunctional/parallel/License 0.55
108 TestFunctional/parallel/ServiceCmd/DeployApp 12.24
118 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
119 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
120 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
121 TestFunctional/parallel/Version/short 0.06
122 TestFunctional/parallel/Version/components 0.74
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.34
125 TestFunctional/parallel/ImageCommands/ImageListJson 1.69
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
127 TestFunctional/parallel/ImageCommands/ImageBuild 8.64
128 TestFunctional/parallel/ImageCommands/Setup 1.91
129 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.57
131 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
132 TestFunctional/parallel/ServiceCmd/List 0.4
133 TestFunctional/parallel/ProfileCmd/profile_list 0.4
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.43
135 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
136 TestFunctional/parallel/ServiceCmd/HTTPS 0.46
137 TestFunctional/parallel/MountCmd/any-port 10.31
138 TestFunctional/parallel/ServiceCmd/Format 0.45
139 TestFunctional/parallel/ServiceCmd/URL 0.4
140 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.76
141 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.04
142 TestFunctional/parallel/ImageCommands/ImageRemove 1.23
143 TestFunctional/parallel/MountCmd/specific-port 1.81
145 TestFunctional/parallel/MountCmd/VerifyCleanup 1.69
146 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 8.19
147 TestFunctional/delete_addon-resizer_images 0.07
148 TestFunctional/delete_my-image_image 0.02
149 TestFunctional/delete_minikube_cached_images 0.01
153 TestIngressAddonLegacy/StartLegacyK8sCluster 123.26
155 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 25.01
156 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.58
160 TestJSONOutput/start/Command 62.39
161 TestJSONOutput/start/Audit 0
163 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
166 TestJSONOutput/pause/Command 0.69
167 TestJSONOutput/pause/Audit 0
169 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
170 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/unpause/Command 0.65
173 TestJSONOutput/unpause/Audit 0
175 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/stop/Command 7.1
179 TestJSONOutput/stop/Audit 0
181 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
183 TestErrorJSONOutput 0.23
188 TestMainNoArgs 0.06
189 TestMinikubeProfile 95.74
192 TestMountStart/serial/StartWithMountFirst 28.76
193 TestMountStart/serial/VerifyMountFirst 0.41
194 TestMountStart/serial/StartWithMountSecond 27.9
195 TestMountStart/serial/VerifyMountSecond 0.4
196 TestMountStart/serial/DeleteFirst 0.67
197 TestMountStart/serial/VerifyMountPostDelete 0.41
198 TestMountStart/serial/Stop 1.15
199 TestMountStart/serial/RestartStopped 22.94
200 TestMountStart/serial/VerifyMountPostStop 0.42
203 TestMultiNode/serial/FreshStart2Nodes 110.93
204 TestMultiNode/serial/DeployApp2Nodes 6.41
206 TestMultiNode/serial/AddNode 45.63
207 TestMultiNode/serial/ProfileList 0.21
208 TestMultiNode/serial/CopyFile 7.64
209 TestMultiNode/serial/StopNode 2.24
210 TestMultiNode/serial/StartAfterStop 30.3
212 TestMultiNode/serial/DeleteNode 1.62
214 TestMultiNode/serial/RestartMultiNode 445.51
215 TestMultiNode/serial/ValidateNameConflict 50.1
222 TestScheduledStopUnix 116.74
228 TestKubernetesUpgrade 185.9
231 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
232 TestNoKubernetes/serial/StartWithK8s 105.61
233 TestNoKubernetes/serial/StartWithStopK8s 9.16
234 TestNoKubernetes/serial/Start 28.13
235 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
236 TestNoKubernetes/serial/ProfileList 0.66
237 TestNoKubernetes/serial/Stop 1.22
238 TestNoKubernetes/serial/StartNoArgs 57.09
246 TestNetworkPlugins/group/false 3.81
250 TestStoppedBinaryUpgrade/Setup 1.67
252 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
261 TestPause/serial/Start 69.57
262 TestNetworkPlugins/group/auto/Start 63.27
264 TestNetworkPlugins/group/auto/KubeletFlags 0.22
265 TestNetworkPlugins/group/auto/NetCatPod 11.41
266 TestNetworkPlugins/group/auto/DNS 0.19
267 TestNetworkPlugins/group/auto/Localhost 0.18
268 TestNetworkPlugins/group/auto/HairPin 0.15
269 TestNetworkPlugins/group/kindnet/Start 74.23
270 TestNetworkPlugins/group/calico/Start 110.52
271 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
272 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
273 TestNetworkPlugins/group/kindnet/NetCatPod 12.41
274 TestNetworkPlugins/group/kindnet/DNS 0.19
275 TestNetworkPlugins/group/kindnet/Localhost 0.17
276 TestNetworkPlugins/group/kindnet/HairPin 0.21
277 TestNetworkPlugins/group/custom-flannel/Start 102.69
278 TestNetworkPlugins/group/enable-default-cni/Start 131.06
279 TestStoppedBinaryUpgrade/MinikubeLogs 0.42
280 TestNetworkPlugins/group/flannel/Start 143.08
281 TestNetworkPlugins/group/calico/ControllerPod 5.03
282 TestNetworkPlugins/group/calico/KubeletFlags 0.23
283 TestNetworkPlugins/group/calico/NetCatPod 11.42
284 TestNetworkPlugins/group/calico/DNS 0.21
285 TestNetworkPlugins/group/calico/Localhost 0.16
286 TestNetworkPlugins/group/calico/HairPin 0.18
287 TestNetworkPlugins/group/bridge/Start 103.9
288 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.52
289 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.69
290 TestNetworkPlugins/group/custom-flannel/DNS 0.19
291 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
292 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
294 TestStartStop/group/old-k8s-version/serial/FirstStart 141.91
295 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.27
296 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.4
297 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
298 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
299 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
300 TestNetworkPlugins/group/flannel/ControllerPod 5.03
301 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
302 TestNetworkPlugins/group/bridge/NetCatPod 15.45
303 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
304 TestNetworkPlugins/group/flannel/NetCatPod 15.51
306 TestStartStop/group/no-preload/serial/FirstStart 87.3
307 TestNetworkPlugins/group/bridge/DNS 0.24
308 TestNetworkPlugins/group/bridge/Localhost 0.19
309 TestNetworkPlugins/group/bridge/HairPin 0.19
310 TestNetworkPlugins/group/flannel/DNS 0.23
311 TestNetworkPlugins/group/flannel/Localhost 0.26
312 TestNetworkPlugins/group/flannel/HairPin 0.19
314 TestStartStop/group/embed-certs/serial/FirstStart 68.17
316 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 132.99
317 TestStartStop/group/no-preload/serial/DeployApp 11.52
318 TestStartStop/group/embed-certs/serial/DeployApp 10.53
319 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.27
321 TestStartStop/group/old-k8s-version/serial/DeployApp 9.49
322 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.2
324 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.02
326 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.41
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.17
332 TestStartStop/group/no-preload/serial/SecondStart 698.17
333 TestStartStop/group/embed-certs/serial/SecondStart 623.25
334 TestStartStop/group/old-k8s-version/serial/SecondStart 729.58
336 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 599.15
346 TestStartStop/group/newest-cni/serial/FirstStart 59.46
347 TestStartStop/group/newest-cni/serial/DeployApp 0
348 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.62
349 TestStartStop/group/newest-cni/serial/Stop 10.43
350 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
351 TestStartStop/group/newest-cni/serial/SecondStart 48.52
352 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
353 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
354 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
355 TestStartStop/group/newest-cni/serial/Pause 2.5
x
+
TestDownloadOnly/v1.16.0/json-events (22.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-319582 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-319582 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (22.091138068s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (22.09s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-319582
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-319582: exit status 85 (75.030711ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-319582 | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:43 UTC |          |
	|         | -p download-only-319582        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|         | --driver=kvm2                  |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/31 23:43:46
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 23:43:46.535545   14516 out.go:296] Setting OutFile to fd 1 ...
	I1031 23:43:46.535813   14516 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 23:43:46.535824   14516 out.go:309] Setting ErrFile to fd 2...
	I1031 23:43:46.535831   14516 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 23:43:46.536081   14516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7305/.minikube/bin
	W1031 23:43:46.536229   14516 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17486-7305/.minikube/config/config.json: open /home/jenkins/minikube-integration/17486-7305/.minikube/config/config.json: no such file or directory
	I1031 23:43:46.536832   14516 out.go:303] Setting JSON to true
	I1031 23:43:46.537679   14516 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1572,"bootTime":1698794255,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 23:43:46.537748   14516 start.go:138] virtualization: kvm guest
	I1031 23:43:46.540323   14516 out.go:97] [download-only-319582] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	W1031 23:43:46.540431   14516 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball: no such file or directory
	I1031 23:43:46.541971   14516 out.go:169] MINIKUBE_LOCATION=17486
	I1031 23:43:46.540524   14516 notify.go:220] Checking for updates...
	I1031 23:43:46.543822   14516 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 23:43:46.545542   14516 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1031 23:43:46.546966   14516 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7305/.minikube
	I1031 23:43:46.548567   14516 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1031 23:43:46.551382   14516 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1031 23:43:46.551589   14516 driver.go:378] Setting default libvirt URI to qemu:///system
	I1031 23:43:46.646011   14516 out.go:97] Using the kvm2 driver based on user configuration
	I1031 23:43:46.646044   14516 start.go:298] selected driver: kvm2
	I1031 23:43:46.646055   14516 start.go:902] validating driver "kvm2" against <nil>
	I1031 23:43:46.646395   14516 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 23:43:46.646520   14516 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17486-7305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 23:43:46.661143   14516 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1031 23:43:46.661209   14516 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1031 23:43:46.661787   14516 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1031 23:43:46.661986   14516 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1031 23:43:46.662065   14516 cni.go:84] Creating CNI manager for ""
	I1031 23:43:46.662081   14516 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 23:43:46.662094   14516 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1031 23:43:46.662103   14516 start_flags.go:323] config:
	{Name:download-only-319582 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-319582 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 23:43:46.662422   14516 iso.go:125] acquiring lock: {Name:mk1f649ca0b7c1ae293cd66cb85f9eeda028b20b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 23:43:46.664473   14516 out.go:97] Downloading VM boot image ...
	I1031 23:43:46.664537   14516 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17486-7305/.minikube/cache/iso/amd64/minikube-v1.32.0-1698773592-17486-amd64.iso
	I1031 23:43:55.457163   14516 out.go:97] Starting control plane node download-only-319582 in cluster download-only-319582
	I1031 23:43:55.457188   14516 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1031 23:43:55.554944   14516 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1031 23:43:55.554969   14516 cache.go:56] Caching tarball of preloaded images
	I1031 23:43:55.555153   14516 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1031 23:43:55.557271   14516 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1031 23:43:55.557299   14516 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1031 23:43:55.660491   14516 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-319582"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/json-events (17.56s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-319582 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-319582 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (17.560923016s)
--- PASS: TestDownloadOnly/v1.28.3/json-events (17.56s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/preload-exists
--- PASS: TestDownloadOnly/v1.28.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-319582
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-319582: exit status 85 (76.947308ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-319582 | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:43 UTC |          |
	|         | -p download-only-319582        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|         | --driver=kvm2                  |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	| start   | -o=json --download-only        | download-only-319582 | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:44 UTC |          |
	|         | -p download-only-319582        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.28.3   |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|         | --driver=kvm2                  |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/31 23:44:08
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 23:44:08.706889   14608 out.go:296] Setting OutFile to fd 1 ...
	I1031 23:44:08.706996   14608 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 23:44:08.707003   14608 out.go:309] Setting ErrFile to fd 2...
	I1031 23:44:08.707008   14608 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 23:44:08.707193   14608 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7305/.minikube/bin
	W1031 23:44:08.707308   14608 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17486-7305/.minikube/config/config.json: open /home/jenkins/minikube-integration/17486-7305/.minikube/config/config.json: no such file or directory
	I1031 23:44:08.707753   14608 out.go:303] Setting JSON to true
	I1031 23:44:08.708581   14608 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1594,"bootTime":1698794255,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 23:44:08.708645   14608 start.go:138] virtualization: kvm guest
	I1031 23:44:08.711371   14608 out.go:97] [download-only-319582] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1031 23:44:08.713234   14608 out.go:169] MINIKUBE_LOCATION=17486
	I1031 23:44:08.711573   14608 notify.go:220] Checking for updates...
	I1031 23:44:08.716494   14608 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 23:44:08.718199   14608 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1031 23:44:08.720060   14608 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7305/.minikube
	I1031 23:44:08.721663   14608 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1031 23:44:08.724385   14608 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1031 23:44:08.725036   14608 config.go:182] Loaded profile config "download-only-319582": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1031 23:44:08.725094   14608 start.go:810] api.Load failed for download-only-319582: filestore "download-only-319582": Docker machine "download-only-319582" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1031 23:44:08.725202   14608 driver.go:378] Setting default libvirt URI to qemu:///system
	W1031 23:44:08.725245   14608 start.go:810] api.Load failed for download-only-319582: filestore "download-only-319582": Docker machine "download-only-319582" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1031 23:44:08.757803   14608 out.go:97] Using the kvm2 driver based on existing profile
	I1031 23:44:08.757842   14608 start.go:298] selected driver: kvm2
	I1031 23:44:08.757848   14608 start.go:902] validating driver "kvm2" against &{Name:download-only-319582 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-319582 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 23:44:08.758263   14608 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 23:44:08.758336   14608 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17486-7305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 23:44:08.772993   14608 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1031 23:44:08.773741   14608 cni.go:84] Creating CNI manager for ""
	I1031 23:44:08.773760   14608 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 23:44:08.773777   14608 start_flags.go:323] config:
	{Name:download-only-319582 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:download-only-319582 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 23:44:08.773950   14608 iso.go:125] acquiring lock: {Name:mk1f649ca0b7c1ae293cd66cb85f9eeda028b20b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 23:44:08.775748   14608 out.go:97] Starting control plane node download-only-319582 in cluster download-only-319582
	I1031 23:44:08.775765   14608 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1031 23:44:08.873472   14608 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1031 23:44:08.873501   14608 cache.go:56] Caching tarball of preloaded images
	I1031 23:44:08.873660   14608 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1031 23:44:08.875744   14608 out.go:97] Downloading Kubernetes v1.28.3 preload ...
	I1031 23:44:08.875780   14608 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 ...
	I1031 23:44:08.980486   14608 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:6681d82b7b719ef3324102b709ec62eb -> /home/jenkins/minikube-integration/17486-7305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-319582"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.3/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-319582
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-737072 --alsologtostderr --binary-mirror http://127.0.0.1:42933 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-737072" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-737072
--- PASS: TestBinaryMirror (0.59s)

                                                
                                    
x
+
TestOffline (132.02s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-316115 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-316115 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m11.17515593s)
helpers_test.go:175: Cleaning up "offline-crio-316115" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-316115
--- PASS: TestOffline (132.02s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-798361
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-798361: exit status 85 (70.250102ms)

                                                
                                                
-- stdout --
	* Profile "addons-798361" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-798361"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-798361
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-798361: exit status 85 (69.261951ms)

                                                
                                                
-- stdout --
	* Profile "addons-798361" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-798361"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (215.11s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-798361 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-798361 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m35.111300065s)
--- PASS: TestAddons/Setup (215.11s)

                                                
                                    
x
+
TestAddons/parallel/Registry (17.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 29.932812ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-z954q" [bdbe9b30-2dde-43e5-a3b9-d5747f4c16ab] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.018873806s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-b44rj" [6381e896-06e3-4249-96d2-436fd28a088d] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.016024951s
addons_test.go:339: (dbg) Run:  kubectl --context addons-798361 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-798361 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-798361 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.874777447s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-798361 ip
2023/10/31 23:48:18 [DEBUG] GET http://192.168.39.214:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-798361 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.01s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.26s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-rwgjj" [20cd3fc0-dc9c-4a7c-8b53-b959db89ee44] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.042061301s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-798361
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-798361: (6.218745827s)
--- PASS: TestAddons/parallel/InspektorGadget (11.26s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.19s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 29.968604ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-zzhks" [d8225a7f-92d4-400f-83e4-12260eae77aa] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.021570365s
addons_test.go:414: (dbg) Run:  kubectl --context addons-798361 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-798361 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:431: (dbg) Done: out/minikube-linux-amd64 -p addons-798361 addons disable metrics-server --alsologtostderr -v=1: (1.061689479s)
--- PASS: TestAddons/parallel/MetricsServer (6.19s)

                                                
                                    
TestAddons/parallel/HelmTiller (19.29s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 6.671466ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-m2w9s" [67f715dc-230b-49dc-8a07-bd8b3586a4cf] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.01884624s
addons_test.go:472: (dbg) Run:  kubectl --context addons-798361 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-798361 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (13.602860027s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-798361 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (19.29s)

                                                
                                    
TestAddons/parallel/CSI (53.85s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 30.43463ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-798361 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798361 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-798361 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [70091f34-1c8c-44a5-9192-3f3370f18f92] Pending
helpers_test.go:344: "task-pv-pod" [70091f34-1c8c-44a5-9192-3f3370f18f92] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [70091f34-1c8c-44a5-9192-3f3370f18f92] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 18.017491443s
addons_test.go:583: (dbg) Run:  kubectl --context addons-798361 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-798361 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-798361 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-798361 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-798361 delete pod task-pv-pod
addons_test.go:593: (dbg) Done: kubectl --context addons-798361 delete pod task-pv-pod: (1.707193908s)
addons_test.go:599: (dbg) Run:  kubectl --context addons-798361 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-798361 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798361 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798361 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798361 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-798361 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [9ded602a-e0e3-4d30-a4ca-e179766fba67] Pending
helpers_test.go:344: "task-pv-pod-restore" [9ded602a-e0e3-4d30-a4ca-e179766fba67] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [9ded602a-e0e3-4d30-a4ca-e179766fba67] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 11.042294889s
addons_test.go:625: (dbg) Run:  kubectl --context addons-798361 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-798361 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-798361 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-798361 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-798361 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.577796926s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-798361 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (53.85s)
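The pvc.yaml and pv-pod.yaml manifests live in testdata/csi-hostpath-driver and are not reproduced in this log; a minimal sketch of an equivalent claim against the addon's hostpath CSI class (the class name csi-hostpath-sc is an assumption, confirm it with kubectl get storageclass):

kubectl --context addons-798361 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-hostpath-sc  # assumed class name; check with: kubectl get storageclass
EOF
kubectl --context addons-798361 get pvc hpvc -o jsonpath='{.status.phase}'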

                                                
                                    
TestAddons/parallel/Headlamp (15.36s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-798361 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-798361 --alsologtostderr -v=1: (1.33021797s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-94b766c-fzcp5" [d63e52a3-c7fc-4035-b867-099157e15969] Pending
helpers_test.go:344: "headlamp-94b766c-fzcp5" [d63e52a3-c7fc-4035-b867-099157e15969] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-94b766c-fzcp5" [d63e52a3-c7fc-4035-b867-099157e15969] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.023771279s
--- PASS: TestAddons/parallel/Headlamp (15.36s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.73s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-56665cdfc-tx8vg" [2d936749-6354-4630-9748-b97718e0e1c0] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.023260548s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-798361
--- PASS: TestAddons/parallel/CloudSpanner (5.73s)

                                                
                                    
TestAddons/parallel/LocalPath (23.51s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-798361 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-798361 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798361 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798361 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798361 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798361 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798361 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798361 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798361 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798361 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798361 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798361 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798361 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798361 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798361 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798361 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [4efc5ba3-eba2-4e1e-87b5-e42150a5df1f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [4efc5ba3-eba2-4e1e-87b5-e42150a5df1f] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [4efc5ba3-eba2-4e1e-87b5-e42150a5df1f] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 10.011722235s
addons_test.go:890: (dbg) Run:  kubectl --context addons-798361 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-798361 ssh "cat /opt/local-path-provisioner/pvc-e5f2c674-b386-4f07-bc7b-156e081994b8_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-798361 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-798361 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-798361 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (23.51s)
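The read-back step above goes through the node filesystem, where the local-path provisioner keeps one directory per claim; judging from the path in the log, that directory is named <pv-name>_<namespace>_<pvc-name>, so a manual check would look roughly like:

PV_NAME=$(kubectl --context addons-798361 get pvc test-pvc -o jsonpath='{.spec.volumeName}')
minikube -p addons-798361 ssh "ls /opt/local-path-provisioner/"
minikube -p addons-798361 ssh "cat /opt/local-path-provisioner/${PV_NAME}_default_test-pvc/file1"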

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.86s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-lvpgt" [9c45c998-76b8-4253-9b15-cd3a9d7756be] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.017259538s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-798361
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.86s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-798361 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-798361 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestCertOptions (105.03s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-406160 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-406160 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m43.445297824s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-406160 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-406160 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-406160 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-406160" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-406160
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-406160: (1.070606639s)
--- PASS: TestCertOptions (105.03s)
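The openssl step above dumps the whole apiserver certificate; to check only that the requested --apiserver-names and --apiserver-ips ended up in the SANs (while the profile still exists), something like this is enough:

minikube -p cert-options-406160 ssh "openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 "Subject Alternative Name"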

                                                
                                    
TestCertExpiration (286.53s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-902201 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-902201 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (51.860310103s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-902201 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-902201 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (53.595451248s)
helpers_test.go:175: Cleaning up "cert-expiration-902201" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-902201
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-902201: (1.070655326s)
--- PASS: TestCertExpiration (286.53s)

                                                
                                    
TestForceSystemdFlag (75.38s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-644407 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-644407 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m14.11007699s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-644407 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-644407" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-644407
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-644407: (1.042772788s)
--- PASS: TestForceSystemdFlag (75.38s)

                                                
                                    
TestForceSystemdEnv (67.18s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-256488 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1101 00:42:16.007074   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-256488 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m6.124256386s)
helpers_test.go:175: Cleaning up "force-systemd-env-256488" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-256488
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-256488: (1.053107627s)
--- PASS: TestForceSystemdEnv (67.18s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.23s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.23s)

                                                
                                    
TestErrorSpam/setup (45.34s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-815186 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-815186 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-815186 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-815186 --driver=kvm2  --container-runtime=crio: (45.344193665s)
--- PASS: TestErrorSpam/setup (45.34s)

                                                
                                    
TestErrorSpam/start (0.4s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-815186 --log_dir /tmp/nospam-815186 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-815186 --log_dir /tmp/nospam-815186 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-815186 --log_dir /tmp/nospam-815186 start --dry-run
--- PASS: TestErrorSpam/start (0.40s)

                                                
                                    
TestErrorSpam/status (0.81s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-815186 --log_dir /tmp/nospam-815186 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-815186 --log_dir /tmp/nospam-815186 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-815186 --log_dir /tmp/nospam-815186 status
--- PASS: TestErrorSpam/status (0.81s)

                                                
                                    
TestErrorSpam/pause (1.53s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-815186 --log_dir /tmp/nospam-815186 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-815186 --log_dir /tmp/nospam-815186 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-815186 --log_dir /tmp/nospam-815186 pause
--- PASS: TestErrorSpam/pause (1.53s)

                                                
                                    
TestErrorSpam/unpause (1.73s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-815186 --log_dir /tmp/nospam-815186 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-815186 --log_dir /tmp/nospam-815186 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-815186 --log_dir /tmp/nospam-815186 unpause
--- PASS: TestErrorSpam/unpause (1.73s)

                                                
                                    
TestErrorSpam/stop (2.27s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-815186 --log_dir /tmp/nospam-815186 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-815186 --log_dir /tmp/nospam-815186 stop: (2.098397655s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-815186 --log_dir /tmp/nospam-815186 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-815186 --log_dir /tmp/nospam-815186 stop
--- PASS: TestErrorSpam/stop (2.27s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17486-7305/.minikube/files/etc/test/nested/copy/14504/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (61.22s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-736766 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-736766 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m1.220282487s)
--- PASS: TestFunctional/serial/StartWithProxy (61.22s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (34.78s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-736766 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-736766 --alsologtostderr -v=8: (34.777031292s)
functional_test.go:659: soft start took 34.77767221s for "functional-736766" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.78s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-736766 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.54s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-736766 cache add registry.k8s.io/pause:3.1: (1.223351517s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-736766 cache add registry.k8s.io/pause:3.3: (1.150799292s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-736766 cache add registry.k8s.io/pause:latest: (1.16133303s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.54s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.2s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-736766 /tmp/TestFunctionalserialCacheCmdcacheadd_local3090347714/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 cache add minikube-local-cache-test:functional-736766
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-736766 cache add minikube-local-cache-test:functional-736766: (1.84321953s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 cache delete minikube-local-cache-test:functional-736766
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-736766
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.20s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-736766 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (240.830911ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)
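Taken together, the CacheCmd subtests above exercise the usual minikube image-cache workflow; condensed, it looks roughly like this (cache list and cache delete are invoked without a profile flag in the log, since the cache is shared across profiles under the minikube home directory):

minikube -p functional-736766 cache add registry.k8s.io/pause:3.1   # pull and load into the node
minikube cache list
minikube -p functional-736766 cache reload                          # re-push cached images into the runtime
minikube cache delete registry.k8s.io/pause:3.1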

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 kubectl -- --context functional-736766 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-736766 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (34.45s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-736766 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-736766 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.446051131s)
functional_test.go:757: restart took 34.446182309s for "functional-736766" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.45s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-736766 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
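The health check above reads the phase and Ready condition of each control-plane pod; a quick manual equivalent for the same cluster:

kubectl --context functional-736766 get pods -n kube-system -l tier=control-plane \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'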

                                                
                                    
TestFunctional/serial/LogsCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-736766 logs: (1.481343463s)
--- PASS: TestFunctional/serial/LogsCmd (1.48s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.46s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 logs --file /tmp/TestFunctionalserialLogsFileCmd1387233065/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-736766 logs --file /tmp/TestFunctionalserialLogsFileCmd1387233065/001/logs.txt: (1.458678782s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.46s)

                                                
                                    
TestFunctional/serial/InvalidService (4.1s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-736766 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-736766
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-736766: exit status 115 (301.492163ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.189:32591 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-736766 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.10s)
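testdata/invalidsvc.yaml is not shown in the log; the SVC_UNREACHABLE failure it provokes can be reproduced with any NodePort service whose selector matches no pods, for example:

kubectl --context functional-736766 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: does-not-exist   # no pod carries this label, so the service has no endpoints
  ports:
    - port: 80
EOF
minikube -p functional-736766 service invalid-svc   # exits non-zero with SVC_UNREACHABLE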

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-736766 config get cpus: exit status 14 (85.673475ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-736766 config get cpus: exit status 14 (56.420103ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (22.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-736766 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-736766 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 22307: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (22.06s)

                                                
                                    
TestFunctional/parallel/DryRun (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-736766 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-736766 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (175.773696ms)

                                                
                                                
-- stdout --
	* [functional-736766] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17486
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17486-7305/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7305/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1031 23:57:30.623291   21403 out.go:296] Setting OutFile to fd 1 ...
	I1031 23:57:30.624581   21403 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 23:57:30.624666   21403 out.go:309] Setting ErrFile to fd 2...
	I1031 23:57:30.624678   21403 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 23:57:30.624922   21403 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7305/.minikube/bin
	I1031 23:57:30.625526   21403 out.go:303] Setting JSON to false
	I1031 23:57:30.626411   21403 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2396,"bootTime":1698794255,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 23:57:30.626485   21403 start.go:138] virtualization: kvm guest
	I1031 23:57:30.628830   21403 out.go:177] * [functional-736766] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1031 23:57:30.630593   21403 out.go:177]   - MINIKUBE_LOCATION=17486
	I1031 23:57:30.630586   21403 notify.go:220] Checking for updates...
	I1031 23:57:30.632189   21403 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 23:57:30.633628   21403 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1031 23:57:30.635053   21403 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7305/.minikube
	I1031 23:57:30.636378   21403 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 23:57:30.637857   21403 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1031 23:57:30.639801   21403 config.go:182] Loaded profile config "functional-736766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 23:57:30.640519   21403 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:57:30.640600   21403 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:57:30.656008   21403 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40767
	I1031 23:57:30.656336   21403 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:57:30.656812   21403 main.go:141] libmachine: Using API Version  1
	I1031 23:57:30.656861   21403 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:57:30.657195   21403 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:57:30.657388   21403 main.go:141] libmachine: (functional-736766) Calling .DriverName
	I1031 23:57:30.657641   21403 driver.go:378] Setting default libvirt URI to qemu:///system
	I1031 23:57:30.657942   21403 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:57:30.657982   21403 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:57:30.673282   21403 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34263
	I1031 23:57:30.673756   21403 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:57:30.674227   21403 main.go:141] libmachine: Using API Version  1
	I1031 23:57:30.674244   21403 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:57:30.674580   21403 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:57:30.674780   21403 main.go:141] libmachine: (functional-736766) Calling .DriverName
	I1031 23:57:30.709808   21403 out.go:177] * Using the kvm2 driver based on existing profile
	I1031 23:57:30.711424   21403 start.go:298] selected driver: kvm2
	I1031 23:57:30.711439   21403 start.go:902] validating driver "kvm2" against &{Name:functional-736766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.3 ClusterName:functional-736766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.189 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 23:57:30.711603   21403 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 23:57:30.713862   21403 out.go:177] 
	W1031 23:57:30.715626   21403 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1031 23:57:30.717282   21403 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-736766 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.34s)

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-736766 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-736766 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (167.315093ms)

-- stdout --
	* [functional-736766] minikube v1.32.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17486
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17486-7305/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7305/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1031 23:57:30.950648   21485 out.go:296] Setting OutFile to fd 1 ...
	I1031 23:57:30.950834   21485 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 23:57:30.950847   21485 out.go:309] Setting ErrFile to fd 2...
	I1031 23:57:30.950855   21485 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 23:57:30.951364   21485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7305/.minikube/bin
	I1031 23:57:30.952239   21485 out.go:303] Setting JSON to false
	I1031 23:57:30.953678   21485 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2396,"bootTime":1698794255,"procs":236,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 23:57:30.953776   21485 start.go:138] virtualization: kvm guest
	I1031 23:57:30.956405   21485 out.go:177] * [functional-736766] minikube v1.32.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	I1031 23:57:30.958145   21485 out.go:177]   - MINIKUBE_LOCATION=17486
	I1031 23:57:30.959565   21485 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 23:57:30.958171   21485 notify.go:220] Checking for updates...
	I1031 23:57:30.961246   21485 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1031 23:57:30.962864   21485 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7305/.minikube
	I1031 23:57:30.964504   21485 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 23:57:30.966214   21485 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1031 23:57:30.968277   21485 config.go:182] Loaded profile config "functional-736766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 23:57:30.968670   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:57:30.968723   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:57:30.984411   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33847
	I1031 23:57:30.984838   21485 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:57:30.985422   21485 main.go:141] libmachine: Using API Version  1
	I1031 23:57:30.985442   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:57:30.985775   21485 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:57:30.985959   21485 main.go:141] libmachine: (functional-736766) Calling .DriverName
	I1031 23:57:30.986226   21485 driver.go:378] Setting default libvirt URI to qemu:///system
	I1031 23:57:30.986515   21485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 23:57:30.986560   21485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:57:31.001947   21485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36505
	I1031 23:57:31.002420   21485 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:57:31.002847   21485 main.go:141] libmachine: Using API Version  1
	I1031 23:57:31.002870   21485 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:57:31.003182   21485 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:57:31.003364   21485 main.go:141] libmachine: (functional-736766) Calling .DriverName
	I1031 23:57:31.037301   21485 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1031 23:57:31.038973   21485 start.go:298] selected driver: kvm2
	I1031 23:57:31.038989   21485 start.go:902] validating driver "kvm2" against &{Name:functional-736766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.3 ClusterName:functional-736766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.189 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 23:57:31.039107   21485 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 23:57:31.041939   21485 out.go:177] 
	W1031 23:57:31.043632   21485 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1031 23:57:31.045077   21485 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

TestFunctional/parallel/StatusCmd (0.88s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.88s)

TestFunctional/parallel/ServiceCmdConnect (11.69s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-736766 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-736766 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-68242" [f7d496e0-63e9-48da-8cbb-3c334c358ef2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-68242" [f7d496e0-63e9-48da-8cbb-3c334c358ef2] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.024472266s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.50.189:30669
functional_test.go:1674: http://192.168.50.189:30669: success! body:

Hostname: hello-node-connect-55497b8b78-68242

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.189:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.50.189:30669
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.69s)

TestFunctional/parallel/AddonsCmd (0.2s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

TestFunctional/parallel/PersistentVolumeClaim (45.41s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [f29b9c02-5315-493e-8ba5-72c420287eb4] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.040535496s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-736766 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-736766 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-736766 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-736766 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c33675d9-40c5-4ab0-a64b-328998c3721a] Pending
helpers_test.go:344: "sp-pod" [c33675d9-40c5-4ab0-a64b-328998c3721a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c33675d9-40c5-4ab0-a64b-328998c3721a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.023721476s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-736766 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-736766 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-736766 delete -f testdata/storage-provisioner/pod.yaml: (2.151163129s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-736766 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2739dc8b-14c5-48b5-984c-1ffac0ebf6b6] Pending
helpers_test.go:344: "sp-pod" [2739dc8b-14c5-48b5-984c-1ffac0ebf6b6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2739dc8b-14c5-48b5-984c-1ffac0ebf6b6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.038234237s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-736766 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.41s)

TestFunctional/parallel/SSHCmd (0.41s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

TestFunctional/parallel/CpCmd (1.1s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 ssh -n functional-736766 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 cp functional-736766:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4107681530/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 ssh -n functional-736766 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.10s)

TestFunctional/parallel/MySQL (31.86s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-736766 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-cd5wr" [a4bd0c52-f805-4408-87d9-6c2933b1a073] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-cd5wr" [a4bd0c52-f805-4408-87d9-6c2933b1a073] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 29.017029813s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-736766 exec mysql-859648c796-cd5wr -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-736766 exec mysql-859648c796-cd5wr -- mysql -ppassword -e "show databases;": exit status 1 (202.315431ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-736766 exec mysql-859648c796-cd5wr -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-736766 exec mysql-859648c796-cd5wr -- mysql -ppassword -e "show databases;": exit status 1 (318.281788ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-736766 exec mysql-859648c796-cd5wr -- mysql -ppassword -e "show databases;"
E1031 23:58:02.504884   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
E1031 23:58:02.511001   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
E1031 23:58:02.521340   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
E1031 23:58:02.541673   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
E1031 23:58:02.581994   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
E1031 23:58:02.662409   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
E1031 23:58:02.822828   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/MySQL (31.86s)

TestFunctional/parallel/FileSync (0.3s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/14504/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 ssh "sudo cat /etc/test/nested/copy/14504/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)

TestFunctional/parallel/CertSync (1.53s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/14504.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 ssh "sudo cat /etc/ssl/certs/14504.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/14504.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 ssh "sudo cat /usr/share/ca-certificates/14504.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/145042.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 ssh "sudo cat /etc/ssl/certs/145042.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/145042.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 ssh "sudo cat /usr/share/ca-certificates/145042.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.53s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-736766 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.5s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-736766 ssh "sudo systemctl is-active docker": exit status 1 (242.07922ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-736766 ssh "sudo systemctl is-active containerd": exit status 1 (261.086291ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.50s)

TestFunctional/parallel/License (0.55s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.55s)

TestFunctional/parallel/ServiceCmd/DeployApp (12.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-736766 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-736766 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-h8wgj" [dd5b43e7-f49d-42ae-999d-6400b22ac2df] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-h8wgj" [dd5b43e7-f49d-42ae-999d-6400b22ac2df] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.01993382s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.24s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.74s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.74s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-736766 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-736766
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-736766
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-736766 image ls --format short --alsologtostderr:
I1031 23:57:54.655683   22459 out.go:296] Setting OutFile to fd 1 ...
I1031 23:57:54.655957   22459 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 23:57:54.655967   22459 out.go:309] Setting ErrFile to fd 2...
I1031 23:57:54.655975   22459 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 23:57:54.656170   22459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7305/.minikube/bin
I1031 23:57:54.656736   22459 config.go:182] Loaded profile config "functional-736766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1031 23:57:54.656842   22459 config.go:182] Loaded profile config "functional-736766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1031 23:57:54.657204   22459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1031 23:57:54.657260   22459 main.go:141] libmachine: Launching plugin server for driver kvm2
I1031 23:57:54.671391   22459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43815
I1031 23:57:54.671854   22459 main.go:141] libmachine: () Calling .GetVersion
I1031 23:57:54.672418   22459 main.go:141] libmachine: Using API Version  1
I1031 23:57:54.672442   22459 main.go:141] libmachine: () Calling .SetConfigRaw
I1031 23:57:54.672754   22459 main.go:141] libmachine: () Calling .GetMachineName
I1031 23:57:54.672956   22459 main.go:141] libmachine: (functional-736766) Calling .GetState
I1031 23:57:54.674799   22459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1031 23:57:54.674831   22459 main.go:141] libmachine: Launching plugin server for driver kvm2
I1031 23:57:54.689552   22459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34929
I1031 23:57:54.689983   22459 main.go:141] libmachine: () Calling .GetVersion
I1031 23:57:54.690516   22459 main.go:141] libmachine: Using API Version  1
I1031 23:57:54.690541   22459 main.go:141] libmachine: () Calling .SetConfigRaw
I1031 23:57:54.690876   22459 main.go:141] libmachine: () Calling .GetMachineName
I1031 23:57:54.691050   22459 main.go:141] libmachine: (functional-736766) Calling .DriverName
I1031 23:57:54.691293   22459 ssh_runner.go:195] Run: systemctl --version
I1031 23:57:54.691326   22459 main.go:141] libmachine: (functional-736766) Calling .GetSSHHostname
I1031 23:57:54.694502   22459 main.go:141] libmachine: (functional-736766) DBG | domain functional-736766 has defined MAC address 52:54:00:7c:ed:1f in network mk-functional-736766
I1031 23:57:54.694905   22459 main.go:141] libmachine: (functional-736766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ed:1f", ip: ""} in network mk-functional-736766: {Iface:virbr1 ExpiryTime:2023-11-01 00:55:05 +0000 UTC Type:0 Mac:52:54:00:7c:ed:1f Iaid: IPaddr:192.168.50.189 Prefix:24 Hostname:functional-736766 Clientid:01:52:54:00:7c:ed:1f}
I1031 23:57:54.694938   22459 main.go:141] libmachine: (functional-736766) DBG | domain functional-736766 has defined IP address 192.168.50.189 and MAC address 52:54:00:7c:ed:1f in network mk-functional-736766
I1031 23:57:54.695105   22459 main.go:141] libmachine: (functional-736766) Calling .GetSSHPort
I1031 23:57:54.695279   22459 main.go:141] libmachine: (functional-736766) Calling .GetSSHKeyPath
I1031 23:57:54.695439   22459 main.go:141] libmachine: (functional-736766) Calling .GetSSHUsername
I1031 23:57:54.695582   22459 sshutil.go:53] new ssh client: &{IP:192.168.50.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/functional-736766/id_rsa Username:docker}
I1031 23:57:54.782930   22459 ssh_runner.go:195] Run: sudo crictl images --output json
I1031 23:57:54.840194   22459 main.go:141] libmachine: Making call to close driver server
I1031 23:57:54.840211   22459 main.go:141] libmachine: (functional-736766) Calling .Close
I1031 23:57:54.840458   22459 main.go:141] libmachine: Successfully made call to close driver server
I1031 23:57:54.840477   22459 main.go:141] libmachine: Making call to close connection to plugin binary
I1031 23:57:54.840492   22459 main.go:141] libmachine: Making call to close driver server
I1031 23:57:54.840500   22459 main.go:141] libmachine: (functional-736766) Calling .Close
I1031 23:57:54.840714   22459 main.go:141] libmachine: Successfully made call to close driver server
I1031 23:57:54.840757   22459 main.go:141] libmachine: Making call to close connection to plugin binary
I1031 23:57:54.840761   22459 main.go:141] libmachine: (functional-736766) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-736766 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | latest             | 593aee2afb642 | 191MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/kube-apiserver          | v1.28.3            | 5374347291230 | 127MB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-736766  | 6d134a8d34660 | 3.35kB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/library/mysql                 | 5.7                | 547b3c3c15a96 | 520MB  |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-controller-manager | v1.28.3            | 10baa1ca17068 | 123MB  |
| registry.k8s.io/kube-scheduler          | v1.28.3            | 6d1b4fd1b182d | 61.5MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-proxy              | v1.28.3            | bfc896cf80fba | 74.7MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| gcr.io/google-containers/addon-resizer  | functional-736766  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-736766 image ls --format table --alsologtostderr:
I1031 23:58:02.960955   22668 out.go:296] Setting OutFile to fd 1 ...
I1031 23:58:02.961114   22668 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 23:58:02.961125   22668 out.go:309] Setting ErrFile to fd 2...
I1031 23:58:02.961132   22668 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 23:58:02.961353   22668 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7305/.minikube/bin
I1031 23:58:02.961965   22668 config.go:182] Loaded profile config "functional-736766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1031 23:58:02.962092   22668 config.go:182] Loaded profile config "functional-736766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1031 23:58:02.962474   22668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1031 23:58:02.962533   22668 main.go:141] libmachine: Launching plugin server for driver kvm2
I1031 23:58:02.977111   22668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36951
I1031 23:58:02.977565   22668 main.go:141] libmachine: () Calling .GetVersion
I1031 23:58:02.978121   22668 main.go:141] libmachine: Using API Version  1
I1031 23:58:02.978139   22668 main.go:141] libmachine: () Calling .SetConfigRaw
I1031 23:58:02.978501   22668 main.go:141] libmachine: () Calling .GetMachineName
I1031 23:58:02.978695   22668 main.go:141] libmachine: (functional-736766) Calling .GetState
I1031 23:58:02.980627   22668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1031 23:58:02.980671   22668 main.go:141] libmachine: Launching plugin server for driver kvm2
I1031 23:58:02.995425   22668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42791
I1031 23:58:02.995800   22668 main.go:141] libmachine: () Calling .GetVersion
I1031 23:58:02.996275   22668 main.go:141] libmachine: Using API Version  1
I1031 23:58:02.996291   22668 main.go:141] libmachine: () Calling .SetConfigRaw
I1031 23:58:02.996647   22668 main.go:141] libmachine: () Calling .GetMachineName
I1031 23:58:02.996823   22668 main.go:141] libmachine: (functional-736766) Calling .DriverName
I1031 23:58:02.997020   22668 ssh_runner.go:195] Run: systemctl --version
I1031 23:58:02.997045   22668 main.go:141] libmachine: (functional-736766) Calling .GetSSHHostname
I1031 23:58:02.999826   22668 main.go:141] libmachine: (functional-736766) DBG | domain functional-736766 has defined MAC address 52:54:00:7c:ed:1f in network mk-functional-736766
I1031 23:58:03.000328   22668 main.go:141] libmachine: (functional-736766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ed:1f", ip: ""} in network mk-functional-736766: {Iface:virbr1 ExpiryTime:2023-11-01 00:55:05 +0000 UTC Type:0 Mac:52:54:00:7c:ed:1f Iaid: IPaddr:192.168.50.189 Prefix:24 Hostname:functional-736766 Clientid:01:52:54:00:7c:ed:1f}
I1031 23:58:03.000359   22668 main.go:141] libmachine: (functional-736766) DBG | domain functional-736766 has defined IP address 192.168.50.189 and MAC address 52:54:00:7c:ed:1f in network mk-functional-736766
I1031 23:58:03.000530   22668 main.go:141] libmachine: (functional-736766) Calling .GetSSHPort
I1031 23:58:03.000708   22668 main.go:141] libmachine: (functional-736766) Calling .GetSSHKeyPath
I1031 23:58:03.000853   22668 main.go:141] libmachine: (functional-736766) Calling .GetSSHUsername
I1031 23:58:03.001002   22668 sshutil.go:53] new ssh client: &{IP:192.168.50.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/functional-736766/id_rsa Username:docker}
I1031 23:58:03.120315   22668 ssh_runner.go:195] Run: sudo crictl images --output json
I1031 23:58:03.236498   22668 main.go:141] libmachine: Making call to close driver server
I1031 23:58:03.236543   22668 main.go:141] libmachine: (functional-736766) Calling .Close
I1031 23:58:03.236787   22668 main.go:141] libmachine: Successfully made call to close driver server
I1031 23:58:03.236805   22668 main.go:141] libmachine: Making call to close connection to plugin binary
I1031 23:58:03.236822   22668 main.go:141] libmachine: (functional-736766) DBG | Closing plugin on server side
I1031 23:58:03.236841   22668 main.go:141] libmachine: Making call to close driver server
I1031 23:58:03.236852   22668 main.go:141] libmachine: (functional-736766) Calling .Close
I1031 23:58:03.237089   22668 main.go:141] libmachine: Successfully made call to close driver server
I1031 23:58:03.237110   22668 main.go:141] libmachine: Making call to close connection to plugin binary
I1031 23:58:03.237179   22668 main.go:141] libmachine: (functional-736766) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

TestFunctional/parallel/ImageCommands/ImageListJson (1.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-736766 image ls --format json --alsologtostderr: (1.686043078s)
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-736766 image ls --format json --alsologtostderr:
[{"id":"2590a0ac1799e7477ecf90562dcfb45f9c5478eaf5701d59a07a390f8702eabf","repoDigests":["docker.io/library/8c09091ac9fe5853810b9d97ed83f995db8e7b41550226bc91a41ecf99d0b187-tmp@sha256:ebf13bec562a7aa0fb582c379c91ad2f47bbee04eb521b3408d3957f2c525c02"],"repoTags":[],"size":"1466017"},{"id":"6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725","registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.3"],"size":"61498678"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcd
deffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"547b3c3c15a9698ee368530b251e6baa66807c64742355e6724ba59b4d3ec8a6","repoDigests":["docker.io/library/mysql@sha256:444e015ba2ad9fc0884a82cef6c3b15f89db003aef11b55e4daca24f55538cb9","docker.io/library/mysql@sha256:880063e8acda81825f0b946eff47c45235840480da03e71a22113ebafe166a3d"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519576537"},{"id":"10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3","repoDig
ests":["registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707","registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.3"],"size":"123188534"},{"id":"bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf","repoDigests":["registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8","registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.3"],"size":"74691991"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-736766"],"size":"34114467"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a
3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6d134a8d34660940d5264363e93c626b63e27786c475eff47542e3da953bcc22","repoDigests":["localhost/minikube-local-cache-test@sha256:a9814080624fb8acfdba55867c762fc66e6907aded774613411fd3333d9492ac"],"repoTags":["localhost/minikube-local-cache-test:functional-736766"],"size":"3345"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDiges
ts":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076","repoDigests":["registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab","registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.3"],"size":"127165392"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"68
6139"},{"id":"593aee2afb642798b83a85306d2625fd7f089c0a1242c7e75a237846d80aa2a0","repoDigests":["docker.io/library/nginx@sha256:0d60ba9498d4491525334696a736b4c19b56231b972061fab2f536d48ebfd7ce","docker.io/library/nginx@sha256:add4792d930c25dd2abf2ef9ea79de578097a1c175a16ab25814332fe33622de"],"repoTags":["docker.io/library/nginx:latest"],"size":"190960382"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4e
bf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-736766 image ls --format json --alsologtostderr:
I1031 23:58:01.461295   22635 out.go:296] Setting OutFile to fd 1 ...
I1031 23:58:01.461481   22635 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 23:58:01.461491   22635 out.go:309] Setting ErrFile to fd 2...
I1031 23:58:01.461496   22635 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 23:58:01.461765   22635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7305/.minikube/bin
I1031 23:58:01.462958   22635 config.go:182] Loaded profile config "functional-736766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1031 23:58:01.463157   22635 config.go:182] Loaded profile config "functional-736766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1031 23:58:01.463726   22635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1031 23:58:01.463805   22635 main.go:141] libmachine: Launching plugin server for driver kvm2
I1031 23:58:01.477789   22635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41601
I1031 23:58:01.478238   22635 main.go:141] libmachine: () Calling .GetVersion
I1031 23:58:01.478776   22635 main.go:141] libmachine: Using API Version  1
I1031 23:58:01.478804   22635 main.go:141] libmachine: () Calling .SetConfigRaw
I1031 23:58:01.479151   22635 main.go:141] libmachine: () Calling .GetMachineName
I1031 23:58:01.479311   22635 main.go:141] libmachine: (functional-736766) Calling .GetState
I1031 23:58:01.481113   22635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1031 23:58:01.481152   22635 main.go:141] libmachine: Launching plugin server for driver kvm2
I1031 23:58:01.495463   22635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38879
I1031 23:58:01.495890   22635 main.go:141] libmachine: () Calling .GetVersion
I1031 23:58:01.496395   22635 main.go:141] libmachine: Using API Version  1
I1031 23:58:01.496418   22635 main.go:141] libmachine: () Calling .SetConfigRaw
I1031 23:58:01.496774   22635 main.go:141] libmachine: () Calling .GetMachineName
I1031 23:58:01.496930   22635 main.go:141] libmachine: (functional-736766) Calling .DriverName
I1031 23:58:01.497140   22635 ssh_runner.go:195] Run: systemctl --version
I1031 23:58:01.497160   22635 main.go:141] libmachine: (functional-736766) Calling .GetSSHHostname
I1031 23:58:01.500279   22635 main.go:141] libmachine: (functional-736766) DBG | domain functional-736766 has defined MAC address 52:54:00:7c:ed:1f in network mk-functional-736766
I1031 23:58:01.500662   22635 main.go:141] libmachine: (functional-736766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ed:1f", ip: ""} in network mk-functional-736766: {Iface:virbr1 ExpiryTime:2023-11-01 00:55:05 +0000 UTC Type:0 Mac:52:54:00:7c:ed:1f Iaid: IPaddr:192.168.50.189 Prefix:24 Hostname:functional-736766 Clientid:01:52:54:00:7c:ed:1f}
I1031 23:58:01.500691   22635 main.go:141] libmachine: (functional-736766) DBG | domain functional-736766 has defined IP address 192.168.50.189 and MAC address 52:54:00:7c:ed:1f in network mk-functional-736766
I1031 23:58:01.500902   22635 main.go:141] libmachine: (functional-736766) Calling .GetSSHPort
I1031 23:58:01.501070   22635 main.go:141] libmachine: (functional-736766) Calling .GetSSHKeyPath
I1031 23:58:01.501217   22635 main.go:141] libmachine: (functional-736766) Calling .GetSSHUsername
I1031 23:58:01.501361   22635 sshutil.go:53] new ssh client: &{IP:192.168.50.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/functional-736766/id_rsa Username:docker}
I1031 23:58:01.620182   22635 ssh_runner.go:195] Run: sudo crictl images --output json
I1031 23:58:03.088563   22635 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.468346462s)
I1031 23:58:03.088959   22635 main.go:141] libmachine: Making call to close driver server
I1031 23:58:03.088970   22635 main.go:141] libmachine: (functional-736766) Calling .Close
I1031 23:58:03.089305   22635 main.go:141] libmachine: (functional-736766) DBG | Closing plugin on server side
I1031 23:58:03.089330   22635 main.go:141] libmachine: Successfully made call to close driver server
I1031 23:58:03.089346   22635 main.go:141] libmachine: Making call to close connection to plugin binary
I1031 23:58:03.089364   22635 main.go:141] libmachine: Making call to close driver server
I1031 23:58:03.089376   22635 main.go:141] libmachine: (functional-736766) Calling .Close
I1031 23:58:03.089613   22635 main.go:141] libmachine: (functional-736766) DBG | Closing plugin on server side
I1031 23:58:03.089661   22635 main.go:141] libmachine: Successfully made call to close driver server
I1031 23:58:03.089674   22635 main.go:141] libmachine: Making call to close connection to plugin binary
E1031 23:58:03.143613   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (1.69s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-736766 image ls --format yaml --alsologtostderr:
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 6d134a8d34660940d5264363e93c626b63e27786c475eff47542e3da953bcc22
repoDigests:
- localhost/minikube-local-cache-test@sha256:a9814080624fb8acfdba55867c762fc66e6907aded774613411fd3333d9492ac
repoTags:
- localhost/minikube-local-cache-test:functional-736766
size: "3345"
- id: bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf
repoDigests:
- registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8
- registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072
repoTags:
- registry.k8s.io/kube-proxy:v1.28.3
size: "74691991"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab
- registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.3
size: "127165392"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 547b3c3c15a9698ee368530b251e6baa66807c64742355e6724ba59b4d3ec8a6
repoDigests:
- docker.io/library/mysql@sha256:444e015ba2ad9fc0884a82cef6c3b15f89db003aef11b55e4daca24f55538cb9
- docker.io/library/mysql@sha256:880063e8acda81825f0b946eff47c45235840480da03e71a22113ebafe166a3d
repoTags:
- docker.io/library/mysql:5.7
size: "519576537"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725
- registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.3
size: "61498678"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 593aee2afb642798b83a85306d2625fd7f089c0a1242c7e75a237846d80aa2a0
repoDigests:
- docker.io/library/nginx@sha256:0d60ba9498d4491525334696a736b4c19b56231b972061fab2f536d48ebfd7ce
- docker.io/library/nginx@sha256:add4792d930c25dd2abf2ef9ea79de578097a1c175a16ab25814332fe33622de
repoTags:
- docker.io/library/nginx:latest
size: "190960382"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-736766
size: "34114467"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707
- registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.3
size: "123188534"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-736766 image ls --format yaml --alsologtostderr:
I1031 23:57:54.901092   22483 out.go:296] Setting OutFile to fd 1 ...
I1031 23:57:54.901233   22483 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 23:57:54.901245   22483 out.go:309] Setting ErrFile to fd 2...
I1031 23:57:54.901253   22483 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 23:57:54.901473   22483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7305/.minikube/bin
I1031 23:57:54.902076   22483 config.go:182] Loaded profile config "functional-736766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1031 23:57:54.902200   22483 config.go:182] Loaded profile config "functional-736766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1031 23:57:54.902607   22483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1031 23:57:54.902678   22483 main.go:141] libmachine: Launching plugin server for driver kvm2
I1031 23:57:54.916817   22483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43413
I1031 23:57:54.917259   22483 main.go:141] libmachine: () Calling .GetVersion
I1031 23:57:54.917848   22483 main.go:141] libmachine: Using API Version  1
I1031 23:57:54.917881   22483 main.go:141] libmachine: () Calling .SetConfigRaw
I1031 23:57:54.918186   22483 main.go:141] libmachine: () Calling .GetMachineName
I1031 23:57:54.918455   22483 main.go:141] libmachine: (functional-736766) Calling .GetState
I1031 23:57:54.920226   22483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1031 23:57:54.920268   22483 main.go:141] libmachine: Launching plugin server for driver kvm2
I1031 23:57:54.934380   22483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37225
I1031 23:57:54.934797   22483 main.go:141] libmachine: () Calling .GetVersion
I1031 23:57:54.935339   22483 main.go:141] libmachine: Using API Version  1
I1031 23:57:54.935374   22483 main.go:141] libmachine: () Calling .SetConfigRaw
I1031 23:57:54.935718   22483 main.go:141] libmachine: () Calling .GetMachineName
I1031 23:57:54.935948   22483 main.go:141] libmachine: (functional-736766) Calling .DriverName
I1031 23:57:54.936232   22483 ssh_runner.go:195] Run: systemctl --version
I1031 23:57:54.936279   22483 main.go:141] libmachine: (functional-736766) Calling .GetSSHHostname
I1031 23:57:54.940759   22483 main.go:141] libmachine: (functional-736766) DBG | domain functional-736766 has defined MAC address 52:54:00:7c:ed:1f in network mk-functional-736766
I1031 23:57:54.941280   22483 main.go:141] libmachine: (functional-736766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ed:1f", ip: ""} in network mk-functional-736766: {Iface:virbr1 ExpiryTime:2023-11-01 00:55:05 +0000 UTC Type:0 Mac:52:54:00:7c:ed:1f Iaid: IPaddr:192.168.50.189 Prefix:24 Hostname:functional-736766 Clientid:01:52:54:00:7c:ed:1f}
I1031 23:57:54.941329   22483 main.go:141] libmachine: (functional-736766) DBG | domain functional-736766 has defined IP address 192.168.50.189 and MAC address 52:54:00:7c:ed:1f in network mk-functional-736766
I1031 23:57:54.941483   22483 main.go:141] libmachine: (functional-736766) Calling .GetSSHPort
I1031 23:57:54.941701   22483 main.go:141] libmachine: (functional-736766) Calling .GetSSHKeyPath
I1031 23:57:54.941882   22483 main.go:141] libmachine: (functional-736766) Calling .GetSSHUsername
I1031 23:57:54.942145   22483 sshutil.go:53] new ssh client: &{IP:192.168.50.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/functional-736766/id_rsa Username:docker}
I1031 23:57:55.033944   22483 ssh_runner.go:195] Run: sudo crictl images --output json
I1031 23:57:55.084931   22483 main.go:141] libmachine: Making call to close driver server
I1031 23:57:55.084949   22483 main.go:141] libmachine: (functional-736766) Calling .Close
I1031 23:57:55.085292   22483 main.go:141] libmachine: Successfully made call to close driver server
I1031 23:57:55.085300   22483 main.go:141] libmachine: (functional-736766) DBG | Closing plugin on server side
I1031 23:57:55.085335   22483 main.go:141] libmachine: Making call to close connection to plugin binary
I1031 23:57:55.085348   22483 main.go:141] libmachine: Making call to close driver server
I1031 23:57:55.085359   22483 main.go:141] libmachine: (functional-736766) Calling .Close
I1031 23:57:55.085598   22483 main.go:141] libmachine: (functional-736766) DBG | Closing plugin on server side
I1031 23:57:55.085639   22483 main.go:141] libmachine: Successfully made call to close driver server
I1031 23:57:55.085654   22483 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (8.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-736766 ssh pgrep buildkitd: exit status 1 (207.086063ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 image build -t localhost/my-image:functional-736766 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-736766 image build -t localhost/my-image:functional-736766 testdata/build --alsologtostderr: (8.194152314s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-736766 image build -t localhost/my-image:functional-736766 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 2590a0ac179
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-736766
--> 8b92caa1c4d
Successfully tagged localhost/my-image:functional-736766
8b92caa1c4d4d086c9576f8c2ad0f8403995a2d8644e639fd7f944644cfa2fe8
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-736766 image build -t localhost/my-image:functional-736766 testdata/build --alsologtostderr:
I1031 23:57:55.352919   22536 out.go:296] Setting OutFile to fd 1 ...
I1031 23:57:55.353203   22536 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 23:57:55.353213   22536 out.go:309] Setting ErrFile to fd 2...
I1031 23:57:55.353217   22536 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 23:57:55.353387   22536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7305/.minikube/bin
I1031 23:57:55.353964   22536 config.go:182] Loaded profile config "functional-736766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1031 23:57:55.354491   22536 config.go:182] Loaded profile config "functional-736766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1031 23:57:55.354870   22536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1031 23:57:55.354931   22536 main.go:141] libmachine: Launching plugin server for driver kvm2
I1031 23:57:55.369790   22536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35107
I1031 23:57:55.370301   22536 main.go:141] libmachine: () Calling .GetVersion
I1031 23:57:55.370947   22536 main.go:141] libmachine: Using API Version  1
I1031 23:57:55.370980   22536 main.go:141] libmachine: () Calling .SetConfigRaw
I1031 23:57:55.371379   22536 main.go:141] libmachine: () Calling .GetMachineName
I1031 23:57:55.371637   22536 main.go:141] libmachine: (functional-736766) Calling .GetState
I1031 23:57:55.373811   22536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1031 23:57:55.373863   22536 main.go:141] libmachine: Launching plugin server for driver kvm2
I1031 23:57:55.389103   22536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46441
I1031 23:57:55.389555   22536 main.go:141] libmachine: () Calling .GetVersion
I1031 23:57:55.389975   22536 main.go:141] libmachine: Using API Version  1
I1031 23:57:55.389990   22536 main.go:141] libmachine: () Calling .SetConfigRaw
I1031 23:57:55.390282   22536 main.go:141] libmachine: () Calling .GetMachineName
I1031 23:57:55.390499   22536 main.go:141] libmachine: (functional-736766) Calling .DriverName
I1031 23:57:55.390715   22536 ssh_runner.go:195] Run: systemctl --version
I1031 23:57:55.390738   22536 main.go:141] libmachine: (functional-736766) Calling .GetSSHHostname
I1031 23:57:55.393640   22536 main.go:141] libmachine: (functional-736766) DBG | domain functional-736766 has defined MAC address 52:54:00:7c:ed:1f in network mk-functional-736766
I1031 23:57:55.394001   22536 main.go:141] libmachine: (functional-736766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ed:1f", ip: ""} in network mk-functional-736766: {Iface:virbr1 ExpiryTime:2023-11-01 00:55:05 +0000 UTC Type:0 Mac:52:54:00:7c:ed:1f Iaid: IPaddr:192.168.50.189 Prefix:24 Hostname:functional-736766 Clientid:01:52:54:00:7c:ed:1f}
I1031 23:57:55.394034   22536 main.go:141] libmachine: (functional-736766) DBG | domain functional-736766 has defined IP address 192.168.50.189 and MAC address 52:54:00:7c:ed:1f in network mk-functional-736766
I1031 23:57:55.394169   22536 main.go:141] libmachine: (functional-736766) Calling .GetSSHPort
I1031 23:57:55.394355   22536 main.go:141] libmachine: (functional-736766) Calling .GetSSHKeyPath
I1031 23:57:55.394514   22536 main.go:141] libmachine: (functional-736766) Calling .GetSSHUsername
I1031 23:57:55.394654   22536 sshutil.go:53] new ssh client: &{IP:192.168.50.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/functional-736766/id_rsa Username:docker}
I1031 23:57:55.487871   22536 build_images.go:151] Building image from path: /tmp/build.1893243342.tar
I1031 23:57:55.487968   22536 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1031 23:57:55.498099   22536 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1893243342.tar
I1031 23:57:55.502602   22536 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1893243342.tar: stat -c "%s %y" /var/lib/minikube/build/build.1893243342.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1893243342.tar': No such file or directory
I1031 23:57:55.502635   22536 ssh_runner.go:362] scp /tmp/build.1893243342.tar --> /var/lib/minikube/build/build.1893243342.tar (3072 bytes)
I1031 23:57:55.526038   22536 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1893243342
I1031 23:57:55.534877   22536 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1893243342 -xf /var/lib/minikube/build/build.1893243342.tar
I1031 23:57:55.544394   22536 crio.go:297] Building image: /var/lib/minikube/build/build.1893243342
I1031 23:57:55.544456   22536 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-736766 /var/lib/minikube/build/build.1893243342 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1031 23:58:03.422949   22536 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-736766 /var/lib/minikube/build/build.1893243342 --cgroup-manager=cgroupfs: (7.878469075s)
I1031 23:58:03.423055   22536 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1893243342
I1031 23:58:03.444840   22536 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1893243342.tar
I1031 23:58:03.486568   22536 build_images.go:207] Built localhost/my-image:functional-736766 from /tmp/build.1893243342.tar
I1031 23:58:03.486600   22536 build_images.go:123] succeeded building to: functional-736766
I1031 23:58:03.486608   22536 build_images.go:124] failed building to: 
I1031 23:58:03.486632   22536 main.go:141] libmachine: Making call to close driver server
I1031 23:58:03.486645   22536 main.go:141] libmachine: (functional-736766) Calling .Close
I1031 23:58:03.486995   22536 main.go:141] libmachine: Successfully made call to close driver server
I1031 23:58:03.487034   22536 main.go:141] libmachine: (functional-736766) DBG | Closing plugin on server side
I1031 23:58:03.487037   22536 main.go:141] libmachine: Making call to close connection to plugin binary
I1031 23:58:03.487061   22536 main.go:141] libmachine: Making call to close driver server
I1031 23:58:03.487079   22536 main.go:141] libmachine: (functional-736766) Calling .Close
I1031 23:58:03.487289   22536 main.go:141] libmachine: Successfully made call to close driver server
I1031 23:58:03.487308   22536 main.go:141] libmachine: Making call to close connection to plugin binary
I1031 23:58:03.487328   22536 main.go:141] libmachine: (functional-736766) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 image ls
E1031 23:58:03.784407   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
2023/10/31 23:58:04 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (8.64s)
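
Note: the three STEP lines in the build output above imply a build context roughly like the sketch below. This is inferred from the log, not copied from the test fixture; the content of content.txt and the exact layout of testdata/build are assumptions.

# Hypothetical reconstruction of the build context exercised by ImageBuild (a sketch, not the actual testdata/build fixture)
mkdir -p /tmp/build-sketch && cd /tmp/build-sketch
echo "placeholder" > content.txt   # the real fixture's file contents are not shown in the log
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
out/minikube-linux-amd64 -p functional-736766 image build -t localhost/my-image:functional-736766 . --alsologtostderr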

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.892753529s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-736766
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.91s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 image load --daemon gcr.io/google-containers/addon-resizer:functional-736766 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-736766 image load --daemon gcr.io/google-containers/addon-resizer:functional-736766 --alsologtostderr: (4.19041685s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.57s)
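
Note: taken together, the Setup and ImageLoadDaemon steps above reduce to the sequence sketched below, using the tag and profile name from this run; it assumes docker and the built minikube binary are available and that the functional-736766 profile is still running.

docker pull gcr.io/google-containers/addon-resizer:1.8.8
docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-736766
out/minikube-linux-amd64 -p functional-736766 image load --daemon gcr.io/google-containers/addon-resizer:functional-736766 --alsologtostderr
out/minikube-linux-amd64 -p functional-736766 image ls   # the freshly loaded tag should now show up in the in-cluster image list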

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.40s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "326.840196ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "71.290024ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 service list -o json
functional_test.go:1493: Took "426.435829ms" to run "out/minikube-linux-amd64 -p functional-736766 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "320.18919ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "62.277535ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.50.189:31681
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (10.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-736766 /tmp/TestFunctionalparallelMountCmdany-port462885827/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1698796649088133194" to /tmp/TestFunctionalparallelMountCmdany-port462885827/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1698796649088133194" to /tmp/TestFunctionalparallelMountCmdany-port462885827/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1698796649088133194" to /tmp/TestFunctionalparallelMountCmdany-port462885827/001/test-1698796649088133194
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-736766 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (312.253532ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 31 23:57 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 31 23:57 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 31 23:57 test-1698796649088133194
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 ssh cat /mount-9p/test-1698796649088133194
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-736766 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [231a91e8-519f-4ea5-860e-e512cff1dbfc] Pending
helpers_test.go:344: "busybox-mount" [231a91e8-519f-4ea5-860e-e512cff1dbfc] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [231a91e8-519f-4ea5-860e-e512cff1dbfc] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [231a91e8-519f-4ea5-860e-e512cff1dbfc] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.076039053s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-736766 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-736766 /tmp/TestFunctionalparallelMountCmdany-port462885827/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.31s)
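
Note: the 9p mount flow exercised above can be walked through by hand roughly as follows; /tmp/mount-src is a hypothetical host directory, and the profile name and flags are the ones seen in this run.

out/minikube-linux-amd64 mount -p functional-736766 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &   # keep the 9p server running in the background
out/minikube-linux-amd64 -p functional-736766 ssh "findmnt -T /mount-9p | grep 9p"                      # confirm the guest sees a 9p mount
out/minikube-linux-amd64 -p functional-736766 ssh -- ls -la /mount-9p                                   # files written on the host are visible in the guest
out/minikube-linux-amd64 -p functional-736766 ssh "sudo umount -f /mount-9p"                            # tear the mount down when finished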

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.45s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.50.189:31681
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)
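
Note: once the URL has been discovered, the endpoint can be poked directly; a hypothetical manual check (curl is an assumption here, not part of the test) would be:

URL=$(out/minikube-linux-amd64 -p functional-736766 service hello-node --url)   # resolved to http://192.168.50.189:31681 in this run
curl -s "$URL"                                                                   # the hello-node service should answer over the NodePort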

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.875514566s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-736766
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 image load --daemon gcr.io/google-containers/addon-resizer:functional-736766 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-736766 image load --daemon gcr.io/google-containers/addon-resizer:functional-736766 --alsologtostderr: (3.626773035s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.76s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 image save gcr.io/google-containers/addon-resizer:functional-736766 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-736766 image save gcr.io/google-containers/addon-resizer:functional-736766 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (2.035975559s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 image rm gcr.io/google-containers/addon-resizer:functional-736766 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.23s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-736766 /tmp/TestFunctionalparallelMountCmdspecific-port2607564283/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-736766 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (300.270319ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-736766 /tmp/TestFunctionalparallelMountCmdspecific-port2607564283/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-736766 ssh "sudo umount -f /mount-9p": exit status 1 (272.580266ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-736766 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-736766 /tmp/TestFunctionalparallelMountCmdspecific-port2607564283/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.81s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-736766 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3410025843/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-736766 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3410025843/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-736766 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3410025843/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-736766 ssh "findmnt -T" /mount1: exit status 1 (328.396247ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-736766 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-736766 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3410025843/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-736766 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3410025843/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-736766 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3410025843/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.69s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (8.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-736766
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-736766 image save --daemon gcr.io/google-containers/addon-resizer:functional-736766 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-736766 image save --daemon gcr.io/google-containers/addon-resizer:functional-736766 --alsologtostderr: (8.156284205s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-736766
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (8.19s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-736766
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-736766
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-736766
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (123.26s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-060181 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1031 23:58:07.626258   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
E1031 23:58:12.746817   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
E1031 23:58:22.987044   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
E1031 23:58:43.468040   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
E1031 23:59:24.428508   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-060181 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (2m3.257196862s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (123.26s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (25.01s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-060181 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-060181 addons enable ingress --alsologtostderr -v=5: (25.005253903s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (25.01s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.58s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-060181 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.58s)

                                                
                                    
TestJSONOutput/start/Command (62.39s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-761844 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1101 00:03:37.929301   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-761844 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m2.389867736s)
--- PASS: TestJSONOutput/start/Command (62.39s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.69s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-761844 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-761844 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.1s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-761844 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-761844 --output=json --user=testUser: (7.10433331s)
--- PASS: TestJSONOutput/stop/Command (7.10s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-226232 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-226232 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (78.677333ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e96f72db-4aad-46de-8beb-9b39fc0a071a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-226232] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fca316a8-939c-4df6-af08-2c890dea1147","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17486"}}
	{"specversion":"1.0","id":"44ce8a82-3dad-4266-b2e3-9752a2fceaea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"658ca36b-b5ae-4a2c-9a8b-071b3387168c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17486-7305/kubeconfig"}}
	{"specversion":"1.0","id":"dd365ccb-4363-4b6d-9849-185ec3c35f45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7305/.minikube"}}
	{"specversion":"1.0","id":"4e1b6582-d502-47cf-bf17-3c27201e6e6c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"1ee399aa-cda8-40f0-877d-8b0c6edf676f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"823af989-bcfe-49ba-a810-3242bb39c8c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-226232" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-226232
--- PASS: TestErrorJSONOutput (0.23s)
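The stdout block above shows the shape of minikube's --output=json stream: one CloudEvents-style JSON object per line, with "specversion", "id", "source", "type" and a string-valued "data" map. As a minimal sketch of consuming that stream, the Go program below decodes each line and special-cases the io.k8s.sigs.minikube.error event seen in the log; the struct and field names are inferred from the output above, not taken from minikube's source.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the CloudEvents-style keys visible in the --output=json lines above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Read the stream on stdin, e.g.:
	//   out/minikube-linux-amd64 start -p demo --output=json | go run parse.go
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // tolerate any non-JSON lines
		}
		switch e.Type {
		case "io.k8s.sigs.minikube.error":
			// Error events carry exitcode/message fields, as in the DRV_UNSUPPORTED_OS line above.
			fmt.Printf("error (exit %s): %s\n", e.Data["exitcode"], e.Data["message"])
		default:
			fmt.Println(e.Data["message"])
		}
	}
}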

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (95.74s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-967449 --driver=kvm2  --container-runtime=crio
E1101 00:04:59.850625   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-967449 --driver=kvm2  --container-runtime=crio: (46.478960667s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-970553 --driver=kvm2  --container-runtime=crio
E1101 00:05:35.091598   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
E1101 00:05:35.096892   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
E1101 00:05:35.107221   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
E1101 00:05:35.127520   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
E1101 00:05:35.167842   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
E1101 00:05:35.248190   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
E1101 00:05:35.408629   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
E1101 00:05:35.729229   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
E1101 00:05:36.370222   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
E1101 00:05:37.650814   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
E1101 00:05:40.212075   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
E1101 00:05:45.332427   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
E1101 00:05:55.573227   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
E1101 00:06:16.054406   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-970553 --driver=kvm2  --container-runtime=crio: (46.799104336s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-967449
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-970553
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-970553" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-970553
helpers_test.go:175: Cleaning up "first-967449" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-967449
--- PASS: TestMinikubeProfile (95.74s)
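The profile checks above rely on "profile list -ojson". Below is a small sketch of shelling out to the same binary and decoding the result generically; the only assumption (labelled in the comments) is that the flag prints a single JSON object, since the exact schema is not shown in the log.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as minikube_profile_test.go:55 above; the binary path is the test build.
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
	if err != nil {
		panic(err)
	}
	// Decode generically rather than committing to a schema; the assumption here is only
	// that -ojson prints one top-level JSON object.
	var doc map[string]interface{}
	if err := json.Unmarshal(out, &doc); err != nil {
		panic(err)
	}
	for key, val := range doc {
		fmt.Printf("%s: %v\n", key, val)
	}
}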

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (28.76s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-690840 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-690840 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.763014464s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.76s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-690840 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-690840 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (27.9s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-711158 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1101 00:06:57.015535   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
E1101 00:07:16.006085   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-711158 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.904556754s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.90s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-711158 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-711158 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-690840 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-711158 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-711158 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.15s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-711158
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-711158: (1.154682914s)
--- PASS: TestMountStart/serial/Stop (1.15s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (22.94s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-711158
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-711158: (21.94188134s)
--- PASS: TestMountStart/serial/RestartStopped (22.94s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-711158 ssh -- ls /minikube-host
E1101 00:07:43.691134   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-711158 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (110.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-600483 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1101 00:08:02.505709   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
E1101 00:08:18.935920   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-600483 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m50.512102122s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (110.93s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (6.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-600483 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-600483 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-600483 -- rollout status deployment/busybox: (4.579166239s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-600483 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-600483 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-600483 -- exec busybox-5bc68d56bd-6jjms -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-600483 -- exec busybox-5bc68d56bd-8pjvd -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-600483 -- exec busybox-5bc68d56bd-6jjms -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-600483 -- exec busybox-5bc68d56bd-8pjvd -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-600483 -- exec busybox-5bc68d56bd-6jjms -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-600483 -- exec busybox-5bc68d56bd-8pjvd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.41s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (45.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-600483 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-600483 -v 3 --alsologtostderr: (45.035824962s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.63s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 cp testdata/cp-test.txt multinode-600483:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 ssh -n multinode-600483 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 cp multinode-600483:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile219734398/001/cp-test_multinode-600483.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 ssh -n multinode-600483 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 cp multinode-600483:/home/docker/cp-test.txt multinode-600483-m02:/home/docker/cp-test_multinode-600483_multinode-600483-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 ssh -n multinode-600483 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 ssh -n multinode-600483-m02 "sudo cat /home/docker/cp-test_multinode-600483_multinode-600483-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 cp multinode-600483:/home/docker/cp-test.txt multinode-600483-m03:/home/docker/cp-test_multinode-600483_multinode-600483-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 ssh -n multinode-600483 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 ssh -n multinode-600483-m03 "sudo cat /home/docker/cp-test_multinode-600483_multinode-600483-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 cp testdata/cp-test.txt multinode-600483-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 ssh -n multinode-600483-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 cp multinode-600483-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile219734398/001/cp-test_multinode-600483-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 ssh -n multinode-600483-m02 "sudo cat /home/docker/cp-test.txt"
E1101 00:10:35.091535   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 cp multinode-600483-m02:/home/docker/cp-test.txt multinode-600483:/home/docker/cp-test_multinode-600483-m02_multinode-600483.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 ssh -n multinode-600483-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 ssh -n multinode-600483 "sudo cat /home/docker/cp-test_multinode-600483-m02_multinode-600483.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 cp multinode-600483-m02:/home/docker/cp-test.txt multinode-600483-m03:/home/docker/cp-test_multinode-600483-m02_multinode-600483-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 ssh -n multinode-600483-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 ssh -n multinode-600483-m03 "sudo cat /home/docker/cp-test_multinode-600483-m02_multinode-600483-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 cp testdata/cp-test.txt multinode-600483-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 ssh -n multinode-600483-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 cp multinode-600483-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile219734398/001/cp-test_multinode-600483-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 ssh -n multinode-600483-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 cp multinode-600483-m03:/home/docker/cp-test.txt multinode-600483:/home/docker/cp-test_multinode-600483-m03_multinode-600483.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 ssh -n multinode-600483-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 ssh -n multinode-600483 "sudo cat /home/docker/cp-test_multinode-600483-m03_multinode-600483.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 cp multinode-600483-m03:/home/docker/cp-test.txt multinode-600483-m02:/home/docker/cp-test_multinode-600483-m03_multinode-600483-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 ssh -n multinode-600483-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 ssh -n multinode-600483-m02 "sudo cat /home/docker/cp-test_multinode-600483-m03_multinode-600483-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.64s)
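The copy test above follows one pattern throughout: push a file into a node with "minikube cp", then read it back over "minikube ssh -n <node>" and compare. The sketch below wraps that round trip in a helper; the profile, node and path arguments in main are taken from the log purely as examples, and the helper itself is illustrative rather than the test's own code.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// copyAndVerify mirrors the cp/ssh round trip exercised above: copy a local file
// into a node with `minikube cp`, then read it back over `minikube ssh` and compare.
func copyAndVerify(profile, node, local, remote string) error {
	if out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"cp", local, node+":"+remote).CombinedOutput(); err != nil {
		return fmt.Errorf("cp failed: %v: %s", err, out)
	}
	got, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "-n", node, "sudo cat "+remote).Output()
	if err != nil {
		return err
	}
	want, err := os.ReadFile(local)
	if err != nil {
		return err
	}
	if string(got) != string(want) {
		return fmt.Errorf("content mismatch on %s", node)
	}
	return nil
}

func main() {
	// Example arguments copied from the commands in the log above.
	if err := copyAndVerify("multinode-600483", "multinode-600483-m02",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}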

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-600483 node stop m03: (1.373413778s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-600483 status: exit status 7 (425.857037ms)

                                                
                                                
-- stdout --
	multinode-600483
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-600483-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-600483-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-600483 status --alsologtostderr: exit status 7 (441.466235ms)

                                                
                                                
-- stdout --
	multinode-600483
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-600483-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-600483-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 00:10:40.959867   29684 out.go:296] Setting OutFile to fd 1 ...
	I1101 00:10:40.960062   29684 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:10:40.960095   29684 out.go:309] Setting ErrFile to fd 2...
	I1101 00:10:40.960104   29684 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:10:40.960390   29684 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7305/.minikube/bin
	I1101 00:10:40.960617   29684 out.go:303] Setting JSON to false
	I1101 00:10:40.960660   29684 mustload.go:65] Loading cluster: multinode-600483
	I1101 00:10:40.960717   29684 notify.go:220] Checking for updates...
	I1101 00:10:40.961197   29684 config.go:182] Loaded profile config "multinode-600483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:10:40.961218   29684 status.go:255] checking status of multinode-600483 ...
	I1101 00:10:40.961782   29684 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1101 00:10:40.961832   29684 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:10:40.983529   29684 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46087
	I1101 00:10:40.984002   29684 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:10:40.984615   29684 main.go:141] libmachine: Using API Version  1
	I1101 00:10:40.984638   29684 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:10:40.985029   29684 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:10:40.985216   29684 main.go:141] libmachine: (multinode-600483) Calling .GetState
	I1101 00:10:40.986638   29684 status.go:330] multinode-600483 host status = "Running" (err=<nil>)
	I1101 00:10:40.986657   29684 host.go:66] Checking if "multinode-600483" exists ...
	I1101 00:10:40.986950   29684 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1101 00:10:40.986984   29684 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:10:41.001950   29684 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37537
	I1101 00:10:41.002333   29684 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:10:41.002817   29684 main.go:141] libmachine: Using API Version  1
	I1101 00:10:41.002837   29684 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:10:41.003119   29684 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:10:41.003398   29684 main.go:141] libmachine: (multinode-600483) Calling .GetIP
	I1101 00:10:41.006227   29684 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:10:41.006614   29684 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:10:41.006652   29684 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:10:41.006765   29684 host.go:66] Checking if "multinode-600483" exists ...
	I1101 00:10:41.007185   29684 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1101 00:10:41.007234   29684 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:10:41.022570   29684 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44377
	I1101 00:10:41.023004   29684 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:10:41.023489   29684 main.go:141] libmachine: Using API Version  1
	I1101 00:10:41.023518   29684 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:10:41.023835   29684 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:10:41.024025   29684 main.go:141] libmachine: (multinode-600483) Calling .DriverName
	I1101 00:10:41.024236   29684 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 00:10:41.024262   29684 main.go:141] libmachine: (multinode-600483) Calling .GetSSHHostname
	I1101 00:10:41.027137   29684 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:10:41.027577   29684 main.go:141] libmachine: (multinode-600483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:59:53", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:08:00 +0000 UTC Type:0 Mac:52:54:00:80:59:53 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-600483 Clientid:01:52:54:00:80:59:53}
	I1101 00:10:41.027615   29684 main.go:141] libmachine: (multinode-600483) DBG | domain multinode-600483 has defined IP address 192.168.39.130 and MAC address 52:54:00:80:59:53 in network mk-multinode-600483
	I1101 00:10:41.027849   29684 main.go:141] libmachine: (multinode-600483) Calling .GetSSHPort
	I1101 00:10:41.028054   29684 main.go:141] libmachine: (multinode-600483) Calling .GetSSHKeyPath
	I1101 00:10:41.028199   29684 main.go:141] libmachine: (multinode-600483) Calling .GetSSHUsername
	I1101 00:10:41.028352   29684 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483/id_rsa Username:docker}
	I1101 00:10:41.112137   29684 ssh_runner.go:195] Run: systemctl --version
	I1101 00:10:41.118139   29684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 00:10:41.133690   29684 kubeconfig.go:92] found "multinode-600483" server: "https://192.168.39.130:8443"
	I1101 00:10:41.133724   29684 api_server.go:166] Checking apiserver status ...
	I1101 00:10:41.133780   29684 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:10:41.146248   29684 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1066/cgroup
	I1101 00:10:41.156485   29684 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod99a9cda13526c350638742a7c7b2ba52/crio-52df3596b4dbf5cbd15e7b446e5e8f49f1d8fba2c92717b82edea9d0c1323801"
	I1101 00:10:41.156562   29684 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod99a9cda13526c350638742a7c7b2ba52/crio-52df3596b4dbf5cbd15e7b446e5e8f49f1d8fba2c92717b82edea9d0c1323801/freezer.state
	I1101 00:10:41.166776   29684 api_server.go:204] freezer state: "THAWED"
	I1101 00:10:41.166804   29684 api_server.go:253] Checking apiserver healthz at https://192.168.39.130:8443/healthz ...
	I1101 00:10:41.171893   29684 api_server.go:279] https://192.168.39.130:8443/healthz returned 200:
	ok
	I1101 00:10:41.171920   29684 status.go:421] multinode-600483 apiserver status = Running (err=<nil>)
	I1101 00:10:41.171950   29684 status.go:257] multinode-600483 status: &{Name:multinode-600483 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 00:10:41.171971   29684 status.go:255] checking status of multinode-600483-m02 ...
	I1101 00:10:41.172275   29684 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1101 00:10:41.172325   29684 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:10:41.187129   29684 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41735
	I1101 00:10:41.187543   29684 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:10:41.187999   29684 main.go:141] libmachine: Using API Version  1
	I1101 00:10:41.188020   29684 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:10:41.188338   29684 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:10:41.188615   29684 main.go:141] libmachine: (multinode-600483-m02) Calling .GetState
	I1101 00:10:41.190209   29684 status.go:330] multinode-600483-m02 host status = "Running" (err=<nil>)
	I1101 00:10:41.190241   29684 host.go:66] Checking if "multinode-600483-m02" exists ...
	I1101 00:10:41.190620   29684 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1101 00:10:41.190663   29684 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:10:41.205305   29684 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41621
	I1101 00:10:41.205779   29684 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:10:41.206224   29684 main.go:141] libmachine: Using API Version  1
	I1101 00:10:41.206244   29684 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:10:41.206520   29684 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:10:41.206675   29684 main.go:141] libmachine: (multinode-600483-m02) Calling .GetIP
	I1101 00:10:41.209531   29684 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:10:41.209979   29684 main.go:141] libmachine: (multinode-600483-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cb:5d", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:09:06 +0000 UTC Type:0 Mac:52:54:00:07:cb:5d Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-600483-m02 Clientid:01:52:54:00:07:cb:5d}
	I1101 00:10:41.210007   29684 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:10:41.210190   29684 host.go:66] Checking if "multinode-600483-m02" exists ...
	I1101 00:10:41.210565   29684 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1101 00:10:41.210606   29684 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:10:41.225101   29684 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36811
	I1101 00:10:41.225573   29684 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:10:41.225992   29684 main.go:141] libmachine: Using API Version  1
	I1101 00:10:41.226017   29684 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:10:41.226295   29684 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:10:41.226470   29684 main.go:141] libmachine: (multinode-600483-m02) Calling .DriverName
	I1101 00:10:41.226717   29684 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 00:10:41.226746   29684 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHHostname
	I1101 00:10:41.229411   29684 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:10:41.229736   29684 main.go:141] libmachine: (multinode-600483-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cb:5d", ip: ""} in network mk-multinode-600483: {Iface:virbr1 ExpiryTime:2023-11-01 01:09:06 +0000 UTC Type:0 Mac:52:54:00:07:cb:5d Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-600483-m02 Clientid:01:52:54:00:07:cb:5d}
	I1101 00:10:41.229780   29684 main.go:141] libmachine: (multinode-600483-m02) DBG | domain multinode-600483-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:07:cb:5d in network mk-multinode-600483
	I1101 00:10:41.229894   29684 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHPort
	I1101 00:10:41.230102   29684 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHKeyPath
	I1101 00:10:41.230260   29684 main.go:141] libmachine: (multinode-600483-m02) Calling .GetSSHUsername
	I1101 00:10:41.230435   29684 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7305/.minikube/machines/multinode-600483-m02/id_rsa Username:docker}
	I1101 00:10:41.310990   29684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 00:10:41.323407   29684 status.go:257] multinode-600483-m02 status: &{Name:multinode-600483-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1101 00:10:41.323443   29684 status.go:255] checking status of multinode-600483-m03 ...
	I1101 00:10:41.323801   29684 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1101 00:10:41.323855   29684 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:10:41.339711   29684 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44133
	I1101 00:10:41.340172   29684 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:10:41.340623   29684 main.go:141] libmachine: Using API Version  1
	I1101 00:10:41.340646   29684 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:10:41.340990   29684 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:10:41.341195   29684 main.go:141] libmachine: (multinode-600483-m03) Calling .GetState
	I1101 00:10:41.342749   29684 status.go:330] multinode-600483-m03 host status = "Stopped" (err=<nil>)
	I1101 00:10:41.342765   29684 status.go:343] host is not running, skipping remaining checks
	I1101 00:10:41.342771   29684 status.go:257] multinode-600483-m03 status: &{Name:multinode-600483-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)
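The stderr trace above shows how "status" decides the apiserver is running: it checks the freezer cgroup state, then issues an HTTPS GET to https://<node-ip>:8443/healthz and expects a 200 "ok". The Go sketch below reproduces only that final healthz probe; unlike the real check it skips certificate verification rather than loading the cluster CA, and the address in main is simply the one from the log.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz mimics the apiserver check visible in the status trace above:
// GET https://<node-ip>:8443/healthz and treat a 200 "ok" body as healthy.
// The real check trusts the cluster's CA; this sketch skips verification instead.
func probeHealthz(addr string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://" + addr + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := probeHealthz("192.168.39.130:8443") // address taken from the trace above
	fmt.Println(ok, err)
}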

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (30.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 node start m03 --alsologtostderr
E1101 00:11:02.777178   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-600483 node start m03 --alsologtostderr: (29.656382035s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (30.30s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (1.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-600483 node delete m03: (1.059214434s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.62s)
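The "kubectl get nodes -o go-template" command above asserts on node readiness by walking .items, finding each node's Ready condition and printing its status. The sketch below evaluates that same template with Go's text/template against a tiny hand-built node list (hypothetical data), just to show the output the test ends up comparing.

package main

import (
	"os"
	"text/template"
)

// nodeTmpl is the template passed to kubectl above: for every item, print the
// status of the condition whose type is "Ready", one per line.
const nodeTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	// Minimal stand-in for the node list returned by the API server (illustrative data only).
	nodes := map[string]interface{}{
		"items": []interface{}{
			map[string]interface{}{"status": map[string]interface{}{
				"conditions": []interface{}{
					map[string]interface{}{"type": "Ready", "status": "True"},
				},
			}},
			map[string]interface{}{"status": map[string]interface{}{
				"conditions": []interface{}{
					map[string]interface{}{"type": "Ready", "status": "True"},
				},
			}},
		},
	}
	t := template.Must(template.New("nodes").Parse(nodeTmpl))
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
	// Prints " True" once per remaining node, which is what the test inspects.
}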

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (445.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-600483 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1101 00:25:35.092071   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
E1101 00:27:16.007048   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
E1101 00:28:02.504202   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
E1101 00:30:35.092103   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
E1101 00:31:05.551324   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
E1101 00:32:16.006916   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-600483 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m24.94772147s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-600483 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (445.51s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (50.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-600483
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-600483-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-600483-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (84.009907ms)

                                                
                                                
-- stdout --
	* [multinode-600483-m02] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17486
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17486-7305/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7305/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-600483-m02' is duplicated with machine name 'multinode-600483-m02' in profile 'multinode-600483'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-600483-m03 --driver=kvm2  --container-runtime=crio
E1101 00:33:02.504146   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-600483-m03 --driver=kvm2  --container-runtime=crio: (48.902359679s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-600483
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-600483: exit status 80 (251.952292ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-600483
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-600483-m03 already exists in multinode-600483-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-600483-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (50.10s)
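Both failures above are name-collision guards: a new profile may not reuse a machine name that already belongs to an existing multi-node profile, and "node add" refuses a node that already exists as its own profile. The sketch below illustrates the first rule over a simple in-memory list; the profile type and validation function are stand-ins, not minikube's own data structures.

package main

import "fmt"

// profile is an illustrative stand-in for minikube's profile records:
// a profile name plus the machine names (nodes) that belong to it.
type profile struct {
	Name     string
	Machines []string
}

// validateProfileName rejects a new profile whose name collides with a machine
// name inside an existing profile, mirroring the MK_USAGE failure above.
func validateProfileName(name string, existing []profile) error {
	for _, p := range existing {
		for _, m := range p.Machines {
			if m == name {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q",
					name, m, p.Name)
			}
		}
	}
	return nil
}

func main() {
	existing := []profile{{
		Name:     "multinode-600483",
		Machines: []string{"multinode-600483", "multinode-600483-m02", "multinode-600483-m03"},
	}}
	fmt.Println(validateProfileName("multinode-600483-m02", existing)) // rejected, as in the log
	fmt.Println(validateProfileName("multinode-600483-m99", existing)) // <nil>
}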

                                                
                                    
x
+
TestScheduledStopUnix (116.74s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-904905 --memory=2048 --driver=kvm2  --container-runtime=crio
E1101 00:38:38.139692   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-904905 --memory=2048 --driver=kvm2  --container-runtime=crio: (44.931792565s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-904905 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-904905 -n scheduled-stop-904905
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-904905 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-904905 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-904905 -n scheduled-stop-904905
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-904905
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-904905 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-904905
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-904905: exit status 7 (85.542658ms)

                                                
                                                
-- stdout --
	scheduled-stop-904905
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-904905 -n scheduled-stop-904905
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-904905 -n scheduled-stop-904905: exit status 7 (75.68746ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-904905" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-904905
--- PASS: TestScheduledStopUnix (116.74s)
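The scheduled-stop test exercises three behaviours: schedule a stop in the future, replace that schedule with a shorter one, and cancel it before it fires. The Go sketch below shows the generic schedule/replace/cancel pattern with time.AfterFunc; it is only an illustration of those semantics, not minikube's actual mechanism, which has to survive beyond the CLI process that issued the command.

package main

import (
	"fmt"
	"sync"
	"time"
)

// scheduler illustrates the behaviour exercised above (stop --schedule 15s
// followed by stop --cancel-scheduled): at most one pending stop, replaceable
// and cancellable.
type scheduler struct {
	mu    sync.Mutex
	timer *time.Timer
}

func (s *scheduler) Schedule(d time.Duration, stop func()) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.timer != nil {
		s.timer.Stop() // a new schedule replaces the previous one
	}
	s.timer = time.AfterFunc(d, stop)
}

func (s *scheduler) Cancel() {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.timer != nil {
		s.timer.Stop()
		s.timer = nil
	}
}

func main() {
	var s scheduler
	s.Schedule(15*time.Second, func() { fmt.Println("stopping cluster") })
	s.Cancel() // the cluster keeps running, which is what the status checks above expect
}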

                                                
                                    
x
+
TestKubernetesUpgrade (185.9s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-346545 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-346545 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m8.771661413s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-346545
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-346545: (2.123955495s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-346545 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-346545 status --format={{.Host}}: exit status 7 (119.250883ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-346545 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-346545 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (54.794002121s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-346545 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-346545 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-346545 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (110.315777ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-346545] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17486
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17486-7305/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7305/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-346545
	    minikube start -p kubernetes-upgrade-346545 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3465452 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.3, by running:
	    
	    minikube start -p kubernetes-upgrade-346545 --kubernetes-version=v1.28.3
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-346545 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-346545 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (58.621738941s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-346545" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-346545
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-346545: (1.288628711s)
--- PASS: TestKubernetesUpgrade (185.90s)
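Note (illustrative, not part of the test log): the sequence above can be replayed by hand. The sketch below is assembled only from commands already logged in this run (profile name, memory, versions and driver are the ones shown above), so treat it as a reproduction aid rather than an authoritative procedure.

# Sketch of the upgrade path exercised by TestKubernetesUpgrade, using the values from this run.
PROFILE=kubernetes-upgrade-346545
# 1) Bring up a cluster on the old Kubernetes version, then stop it.
out/minikube-linux-amd64 start -p "$PROFILE" --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio
out/minikube-linux-amd64 stop -p "$PROFILE"
# 2) Start again on the newer version to upgrade the existing cluster in place.
out/minikube-linux-amd64 start -p "$PROFILE" --memory=2200 --kubernetes-version=v1.28.3 --driver=kvm2 --container-runtime=crio
# 3) A downgrade attempt is refused with K8S_DOWNGRADE_UNSUPPORTED (exit status 106), as seen above.
out/minikube-linux-amd64 start -p "$PROFILE" --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio || echo "downgrade refused, as expected"
# Clean up the profile when done.
out/minikube-linux-amd64 delete -p "$PROFILE"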

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-345470 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-345470 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (90.738528ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-345470] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17486
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17486-7305/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7305/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
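Note (illustrative, not from the log): the non-zero exit above is the expected guard that --no-kubernetes and --kubernetes-version cannot be combined. Following minikube's own hint, any globally configured version can be cleared first:

# Clear a globally configured kubernetes-version, then start the profile without Kubernetes.
out/minikube-linux-amd64 config unset kubernetes-version
out/minikube-linux-amd64 start -p NoKubernetes-345470 --no-kubernetes --driver=kvm2 --container-runtime=crio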

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (105.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-345470 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-345470 --driver=kvm2  --container-runtime=crio: (1m45.287111232s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-345470 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (105.61s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (9.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-345470 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-345470 --no-kubernetes --driver=kvm2  --container-runtime=crio: (8.004645471s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-345470 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-345470 status -o json: exit status 2 (296.746206ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-345470","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-345470
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.16s)
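Note (illustrative): `status -o json` still printed the machine-readable state above even though it exited non-zero (2), which is expected here because not all components are running. Assuming jq is available on the host, individual fields can be pulled out of that output like this:

# Extract selected fields from the JSON status; the non-zero exit code of `status` is expected while kubelet/apiserver are stopped.
out/minikube-linux-amd64 -p NoKubernetes-345470 status -o json | jq -r '.Host, .Kubelet, .APIServer'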

                                                
                                    
x
+
TestNoKubernetes/serial/Start (28.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-345470 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-345470 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.133060528s)
--- PASS: TestNoKubernetes/serial/Start (28.13s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-345470 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-345470 "sudo systemctl is-active --quiet service kubelet": exit status 1 (217.49768ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.66s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-345470
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-345470: (1.223326617s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (57.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-345470 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-345470 --driver=kvm2  --container-runtime=crio: (57.086942138s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (57.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-090856 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-090856 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (126.236518ms)

                                                
                                                
-- stdout --
	* [false-090856] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17486
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17486-7305/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7305/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 00:43:09.486011   40441 out.go:296] Setting OutFile to fd 1 ...
	I1101 00:43:09.486294   40441 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:43:09.486305   40441 out.go:309] Setting ErrFile to fd 2...
	I1101 00:43:09.486312   40441 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:43:09.486509   40441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7305/.minikube/bin
	I1101 00:43:09.487154   40441 out.go:303] Setting JSON to false
	I1101 00:43:09.488265   40441 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5135,"bootTime":1698794255,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 00:43:09.488333   40441 start.go:138] virtualization: kvm guest
	I1101 00:43:09.490776   40441 out.go:177] * [false-090856] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1101 00:43:09.492386   40441 out.go:177]   - MINIKUBE_LOCATION=17486
	I1101 00:43:09.492428   40441 notify.go:220] Checking for updates...
	I1101 00:43:09.493749   40441 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 00:43:09.495186   40441 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-7305/kubeconfig
	I1101 00:43:09.496525   40441 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7305/.minikube
	I1101 00:43:09.498076   40441 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 00:43:09.499586   40441 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 00:43:09.501653   40441 config.go:182] Loaded profile config "NoKubernetes-345470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1101 00:43:09.501817   40441 config.go:182] Loaded profile config "force-systemd-env-256488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:43:09.501946   40441 config.go:182] Loaded profile config "running-upgrade-411881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1101 00:43:09.502060   40441 driver.go:378] Setting default libvirt URI to qemu:///system
	I1101 00:43:09.543242   40441 out.go:177] * Using the kvm2 driver based on user configuration
	I1101 00:43:09.544629   40441 start.go:298] selected driver: kvm2
	I1101 00:43:09.544646   40441 start.go:902] validating driver "kvm2" against <nil>
	I1101 00:43:09.544662   40441 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 00:43:09.546936   40441 out.go:177] 
	W1101 00:43:09.548345   40441 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1101 00:43:09.549731   40441 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-090856 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-090856

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-090856

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-090856

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-090856

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-090856

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-090856

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-090856

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-090856

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-090856

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-090856

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-090856"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-090856"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-090856"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-090856

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-090856"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-090856"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-090856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-090856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-090856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-090856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-090856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-090856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-090856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-090856" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-090856"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-090856"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-090856"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-090856"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-090856"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-090856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-090856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-090856" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-090856"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-090856"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-090856"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-090856"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-090856"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-090856

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-090856"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-090856"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-090856"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-090856"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-090856"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-090856"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-090856"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-090856"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-090856"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-090856"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-090856"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-090856"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-090856"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-090856"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-090856"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-090856"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-090856"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-090856"

                                                
                                                
----------------------- debugLogs end: false-090856 [took: 3.502967888s] --------------------------------
helpers_test.go:175: Cleaning up "false-090856" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-090856
--- PASS: TestNetworkPlugins/group/false (3.81s)
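Note (illustrative): this "false" plugin check only verifies the guard that cri-o refuses to start with --cni=false, so no cluster is ever created, which is why the debug dump above is full of missing-profile/missing-context messages. A working combination is any of the CNI options exercised later in this report, for example the bridge run:

# cri-o requires a CNI; selecting an explicit plugin (here bridge, as in the later test) avoids the MK_USAGE error.
out/minikube-linux-amd64 start -p bridge-090856 --memory=3072 --cni=bridge --driver=kvm2 --container-runtime=crio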

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.67s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.67s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-345470 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-345470 "sudo systemctl is-active --quiet service kubelet": exit status 1 (235.119482ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

                                                
                                    
x
+
TestPause/serial/Start (69.57s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-582989 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-582989 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m9.5663298s)
--- PASS: TestPause/serial/Start (69.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (63.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-090856 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E1101 00:45:35.091971   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-090856 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m3.267171087s)
--- PASS: TestNetworkPlugins/group/auto/Start (63.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-090856 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-090856 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wxs6m" [4e4260ab-d2a0-435c-ae6f-544c110619c9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-wxs6m" [4e4260ab-d2a0-435c-ae6f-544c110619c9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.011670711s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-090856 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-090856 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-090856 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
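Note (illustrative): the DNS, Localhost and HairPin checks above all run inside the netcat test deployment. Copied verbatim from the log, they can be replayed by hand against the auto-090856 context; the last one is the hairpin probe, connecting back to the netcat service from inside the pod it fronts.

kubectl --context auto-090856 exec deployment/netcat -- nslookup kubernetes.default
kubectl --context auto-090856 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
kubectl --context auto-090856 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"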

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (74.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-090856 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-090856 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m14.227960542s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (74.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (110.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-090856 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-090856 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m50.524701616s)
--- PASS: TestNetworkPlugins/group/calico/Start (110.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-xkphw" [25439ccf-148a-4a63-b11e-9e93541a28d9] Running
E1101 00:47:45.551564   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.027385268s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-090856 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-090856 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-b9cfg" [395f3323-5a11-4c17-8d1e-ff023e23b109] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-b9cfg" [395f3323-5a11-4c17-8d1e-ff023e23b109] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.01278518s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-090856 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-090856 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-090856 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (102.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-090856 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-090856 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m42.693247362s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (102.69s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (131.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-090856 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-090856 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m11.063079733s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (131.06s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.42s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-886496
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (143.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-090856 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-090856 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (2m23.081847141s)
--- PASS: TestNetworkPlugins/group/flannel/Start (143.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-tz9dk" [08b8d84e-cb91-4323-a642-2a30e60e331a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.028631426s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-090856 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-090856 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-g2tjm" [0a42beb8-93bb-4770-b3e9-e97ded63f171] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-g2tjm" [0a42beb8-93bb-4770-b3e9-e97ded63f171] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.016031736s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-090856 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-090856 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-090856 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (103.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-090856 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-090856 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m43.897691306s)
--- PASS: TestNetworkPlugins/group/bridge/Start (103.90s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-090856 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-090856 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5bnhp" [dd46162e-24f5-4508-9f73-bb9bf3b7533b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5bnhp" [dd46162e-24f5-4508-9f73-bb9bf3b7533b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.014448404s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.69s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-090856 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-090856 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-090856 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (141.91s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-330042 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-330042 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (2m21.914715357s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (141.91s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-090856 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.4s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-090856 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wlzfz" [3a16610b-1825-4f5b-8c34-90f9c24ee637] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1101 00:50:35.092040   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-wlzfz" [3a16610b-1825-4f5b-8c34-90f9c24ee637] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.012850964s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.40s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-090856 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-090856 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-090856 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

TestNetworkPlugins/group/flannel/ControllerPod (5.03s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-7rwlf" [71936ea4-15bd-47a9-b818-2e9fbad188a8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.023955326s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-090856 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

TestNetworkPlugins/group/bridge/NetCatPod (15.45s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-090856 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rc8bf" [0cb0f9d3-46df-432b-aff2-f7cbd61d6206] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rc8bf" [0cb0f9d3-46df-432b-aff2-f7cbd61d6206] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 15.022015799s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (15.45s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-090856 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/flannel/NetCatPod (15.51s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-090856 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hjwl8" [59f5bd82-bc25-47a5-9616-a196ad950a3d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hjwl8" [59f5bd82-bc25-47a5-9616-a196ad950a3d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 15.016079954s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (15.51s)

TestStartStop/group/no-preload/serial/FirstStart (87.3s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-008483 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-008483 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (1m27.295183762s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (87.30s)

TestNetworkPlugins/group/bridge/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-090856 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

TestNetworkPlugins/group/bridge/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-090856 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

TestNetworkPlugins/group/bridge/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-090856 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)

TestNetworkPlugins/group/flannel/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-090856 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.23s)

TestNetworkPlugins/group/flannel/Localhost (0.26s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-090856 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.26s)

TestNetworkPlugins/group/flannel/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-090856 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)
E1101 01:21:14.121744   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/auto-090856/client.crt: no such file or directory

TestStartStop/group/embed-certs/serial/FirstStart (68.17s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-754132 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-754132 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (1m8.167608523s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (68.17s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (132.99s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-639310 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
E1101 00:51:34.605267   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/auto-090856/client.crt: no such file or directory
E1101 00:51:55.085617   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/auto-090856/client.crt: no such file or directory
E1101 00:51:59.053184   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
E1101 00:52:16.006109   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-639310 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (2m12.992056192s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (132.99s)

TestStartStop/group/no-preload/serial/DeployApp (11.52s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-008483 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c8c3fe96-ad03-4e30-9acb-3e5991bdc9d1] Pending
helpers_test.go:344: "busybox" [c8c3fe96-ad03-4e30-9acb-3e5991bdc9d1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c8c3fe96-ad03-4e30-9acb-3e5991bdc9d1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.054026904s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-008483 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.52s)

TestStartStop/group/embed-certs/serial/DeployApp (10.53s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-754132 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8cfed257-1153-4490-a24e-267d12507244] Pending
helpers_test.go:344: "busybox" [8cfed257-1153-4490-a24e-267d12507244] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1101 00:52:36.046621   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/auto-090856/client.crt: no such file or directory
helpers_test.go:344: "busybox" [8cfed257-1153-4490-a24e-267d12507244] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.033912673s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-754132 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.53s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.27s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-008483 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1101 00:52:43.437919   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/kindnet-090856/client.crt: no such file or directory
E1101 00:52:43.443236   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/kindnet-090856/client.crt: no such file or directory
E1101 00:52:43.453568   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/kindnet-090856/client.crt: no such file or directory
E1101 00:52:43.473963   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/kindnet-090856/client.crt: no such file or directory
E1101 00:52:43.514311   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/kindnet-090856/client.crt: no such file or directory
E1101 00:52:43.594724   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/kindnet-090856/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-008483 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.187103429s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-008483 describe deploy/metrics-server -n kube-system
E1101 00:52:43.754997   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/kindnet-090856/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.27s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.49s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-330042 create -f testdata/busybox.yaml
E1101 00:52:44.715790   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/kindnet-090856/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [03044889-333d-47b1-9fc7-a157de7c34b2] Pending
helpers_test.go:344: "busybox" [03044889-333d-47b1-9fc7-a157de7c34b2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [03044889-333d-47b1-9fc7-a157de7c34b2] Running
E1101 00:52:53.678418   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/kindnet-090856/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.046848276s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-330042 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.49s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.2s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-754132 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1101 00:52:45.996635   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/kindnet-090856/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-754132 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.11110518s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-754132 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.02s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-330042 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-330042 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.02s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.41s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-639310 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0ba45803-c340-4187-9fbb-eac8c39d7f9b] Pending
helpers_test.go:344: "busybox" [0ba45803-c340-4187-9fbb-eac8c39d7f9b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1101 00:53:42.689210   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/calico-090856/client.crt: no such file or directory
helpers_test.go:344: "busybox" [0ba45803-c340-4187-9fbb-eac8c39d7f9b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.029424081s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-639310 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.41s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-639310 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1101 00:53:52.929860   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/calico-090856/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-639310 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.099010038s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-639310 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/no-preload/serial/SecondStart (698.17s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-008483 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
E1101 00:55:18.140055   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-008483 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (11m37.889102839s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-008483 -n no-preload-008483
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (698.17s)

TestStartStop/group/embed-certs/serial/SecondStart (623.25s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-754132 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-754132 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (10m22.965511225s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-754132 -n embed-certs-754132
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (623.25s)

TestStartStop/group/old-k8s-version/serial/SecondStart (729.58s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-330042 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
E1101 00:55:30.316911   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/enable-default-cni-090856/client.crt: no such file or directory
E1101 00:55:30.322186   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/enable-default-cni-090856/client.crt: no such file or directory
E1101 00:55:30.332524   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/enable-default-cni-090856/client.crt: no such file or directory
E1101 00:55:30.352872   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/enable-default-cni-090856/client.crt: no such file or directory
E1101 00:55:30.393196   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/enable-default-cni-090856/client.crt: no such file or directory
E1101 00:55:30.473581   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/enable-default-cni-090856/client.crt: no such file or directory
E1101 00:55:30.634070   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/enable-default-cni-090856/client.crt: no such file or directory
E1101 00:55:30.954696   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/enable-default-cni-090856/client.crt: no such file or directory
E1101 00:55:31.595115   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/enable-default-cni-090856/client.crt: no such file or directory
E1101 00:55:32.021528   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/custom-flannel-090856/client.crt: no such file or directory
E1101 00:55:32.875269   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/enable-default-cni-090856/client.crt: no such file or directory
E1101 00:55:35.091506   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
E1101 00:55:35.436060   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/enable-default-cni-090856/client.crt: no such file or directory
E1101 00:55:40.556845   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/enable-default-cni-090856/client.crt: no such file or directory
E1101 00:55:47.920225   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/flannel-090856/client.crt: no such file or directory
E1101 00:55:47.925561   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/flannel-090856/client.crt: no such file or directory
E1101 00:55:47.935894   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/flannel-090856/client.crt: no such file or directory
E1101 00:55:47.956243   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/flannel-090856/client.crt: no such file or directory
E1101 00:55:47.996639   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/flannel-090856/client.crt: no such file or directory
E1101 00:55:48.076882   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/flannel-090856/client.crt: no such file or directory
E1101 00:55:48.237264   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/flannel-090856/client.crt: no such file or directory
E1101 00:55:48.557847   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/flannel-090856/client.crt: no such file or directory
E1101 00:55:49.198594   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/flannel-090856/client.crt: no such file or directory
E1101 00:55:50.478885   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/flannel-090856/client.crt: no such file or directory
E1101 00:55:50.797322   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/enable-default-cni-090856/client.crt: no such file or directory
E1101 00:55:52.797991   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/bridge-090856/client.crt: no such file or directory
E1101 00:55:52.803332   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/bridge-090856/client.crt: no such file or directory
E1101 00:55:52.813628   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/bridge-090856/client.crt: no such file or directory
E1101 00:55:52.834034   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/bridge-090856/client.crt: no such file or directory
E1101 00:55:52.874371   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/bridge-090856/client.crt: no such file or directory
E1101 00:55:52.954739   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/bridge-090856/client.crt: no such file or directory
E1101 00:55:53.039855   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/flannel-090856/client.crt: no such file or directory
E1101 00:55:53.115123   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/bridge-090856/client.crt: no such file or directory
E1101 00:55:53.435734   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/bridge-090856/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-330042 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (12m9.301814659s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-330042 -n old-k8s-version-330042
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (729.58s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (599.15s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-639310 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
E1101 00:56:28.881828   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/flannel-090856/client.crt: no such file or directory
E1101 00:56:33.760351   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/bridge-090856/client.crt: no such file or directory
E1101 00:56:41.808058   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/auto-090856/client.crt: no such file or directory
E1101 00:56:52.239423   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/enable-default-cni-090856/client.crt: no such file or directory
E1101 00:57:09.842433   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/flannel-090856/client.crt: no such file or directory
E1101 00:57:14.720792   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/bridge-090856/client.crt: no such file or directory
E1101 00:57:16.006982   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
E1101 00:57:34.902852   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/custom-flannel-090856/client.crt: no such file or directory
E1101 00:57:43.437352   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/kindnet-090856/client.crt: no such file or directory
E1101 00:58:02.504280   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
E1101 00:58:11.122246   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/kindnet-090856/client.crt: no such file or directory
E1101 00:58:14.160531   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/enable-default-cni-090856/client.crt: no such file or directory
E1101 00:58:31.763266   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/flannel-090856/client.crt: no such file or directory
E1101 00:58:32.448137   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/calico-090856/client.crt: no such file or directory
E1101 00:58:36.641766   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/bridge-090856/client.crt: no such file or directory
E1101 00:59:00.131980   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/calico-090856/client.crt: no such file or directory
E1101 00:59:51.060267   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/custom-flannel-090856/client.crt: no such file or directory
E1101 01:00:18.743022   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/custom-flannel-090856/client.crt: no such file or directory
E1101 01:00:30.317176   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/enable-default-cni-090856/client.crt: no such file or directory
E1101 01:00:35.091642   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
E1101 01:00:47.920733   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/flannel-090856/client.crt: no such file or directory
E1101 01:00:52.798236   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/bridge-090856/client.crt: no such file or directory
E1101 01:00:58.001724   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/enable-default-cni-090856/client.crt: no such file or directory
E1101 01:01:14.121725   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/auto-090856/client.crt: no such file or directory
E1101 01:01:15.603828   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/flannel-090856/client.crt: no such file or directory
E1101 01:01:20.482429   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/bridge-090856/client.crt: no such file or directory
E1101 01:02:16.006925   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
E1101 01:02:43.437486   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/kindnet-090856/client.crt: no such file or directory
E1101 01:03:02.504525   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
E1101 01:03:32.448106   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/calico-090856/client.crt: no such file or directory
E1101 01:04:25.552495   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/addons-798361/client.crt: no such file or directory
E1101 01:04:51.060319   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/custom-flannel-090856/client.crt: no such file or directory
E1101 01:05:30.317243   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/enable-default-cni-090856/client.crt: no such file or directory
E1101 01:05:35.092019   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-639310 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (9m58.845884976s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-639310 -n default-k8s-diff-port-639310
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (599.15s)

TestStartStop/group/newest-cni/serial/FirstStart (59.46s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-816754 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
E1101 01:20:30.317002   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/enable-default-cni-090856/client.crt: no such file or directory
E1101 01:20:35.091568   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/ingress-addon-legacy-060181/client.crt: no such file or directory
E1101 01:20:47.920079   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/flannel-090856/client.crt: no such file or directory
E1101 01:20:52.798385   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/bridge-090856/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-816754 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (59.457060821s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (59.46s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.62s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-816754 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-816754 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.616251608s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.62s)

TestStartStop/group/newest-cni/serial/Stop (10.43s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-816754 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-816754 --alsologtostderr -v=3: (10.428930378s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.43s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-816754 -n newest-cni-816754
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-816754 -n newest-cni-816754: exit status 7 (85.040014ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-816754 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)
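For reference, the status probe this step issues after the stop can be re-run outside the test harness. The sketch below is only an illustration using Go's os/exec package: the binary path and profile name are copied verbatim from the log above, and nothing is assumed about minikube's exit-code contract beyond what the log itself shows (exit status 7 alongside a "Stopped" host, which the test marks as "may be ok").

// Sketch only, not part of the test suite: re-running the status probe above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Binary path and profile name are taken verbatim from the log above.
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "newest-cni-816754", "-n", "newest-cni-816754")
	out, err := cmd.CombinedOutput()
	fmt.Printf("host state: %s", out)
	if err != nil {
		// The log shows exit status 7 with "Stopped" here, which the test
		// treats as acceptable right after a stop ("may be ok").
		fmt.Printf("non-zero exit: %v (may be ok after a stop)\n", err)
	}
}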

TestStartStop/group/newest-cni/serial/SecondStart (48.52s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-816754 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
E1101 01:22:16.006893   14504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/functional-736766/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-816754 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (48.224232904s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-816754 -n newest-cni-816754
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (48.52s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-816754 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/Pause (2.5s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-816754 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-816754 -n newest-cni-816754
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-816754 -n newest-cni-816754: exit status 2 (261.899202ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-816754 -n newest-cni-816754
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-816754 -n newest-cni-816754: exit status 2 (245.229219ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-816754 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-816754 -n newest-cni-816754
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-816754 -n newest-cni-816754
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.50s)

Test skip (36/292)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.3/cached-images 0
13 TestDownloadOnly/v1.28.3/binaries 0
14 TestDownloadOnly/v1.28.3/kubectl 0
18 TestDownloadOnlyKic 0
32 TestAddons/parallel/Olm 0
44 TestDockerFlags 0
47 TestDockerEnvContainerd 0
49 TestHyperKitDriverInstallOrUpdate 0
50 TestHyperkitDriverSkipUpgrade 0
101 TestFunctional/parallel/DockerEnv 0
102 TestFunctional/parallel/PodmanEnv 0
110 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
111 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
112 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
113 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
114 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
115 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
116 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
117 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
150 TestGvisorAddon 0
151 TestImageBuild 0
184 TestKicCustomNetwork 0
185 TestKicExistingNetwork 0
186 TestKicCustomSubnet 0
187 TestKicStaticIP 0
218 TestChangeNoneUser 0
221 TestScheduledStopWindows 0
223 TestSkaffold 0
225 TestInsufficientStorage 0
229 TestMissingContainerUpgrade 0
241 TestNetworkPlugins/group/kubenet 4.11
249 TestNetworkPlugins/group/cilium 4.2
258 TestStartStop/group/disable-driver-mounts 0.16
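Most of the rows above are environment gates rather than functional gaps: docker-only, darwin/windows-only, or KIC-only tests that cannot run against the kvm2 driver and crio runtime used for this report. The detailed skip reasons follow below; as a minimal, hypothetical sketch (not the minikube test source), such gates are typically written with Go's standard testing package like this:

// Hypothetical sketch of the runtime- and platform-gated skips listed above;
// containerRuntime stands in for the value the real suite derives from its flags.
package example

import (
	"runtime"
	"testing"
)

var containerRuntime = "crio"

func TestDockerOnlyFeature(t *testing.T) {
	// Mirrors messages like "skipping: only runs with docker container runtime, currently testing crio".
	if containerRuntime != "docker" {
		t.Skipf("skipping: only runs with docker container runtime, currently testing %s", containerRuntime)
	}
}

func TestDarwinOnlyDriver(t *testing.T) {
	// Mirrors messages like "Skip if not darwin."
	if runtime.GOOS != "darwin" {
		t.Skip("Skip if not darwin.")
	}
}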

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.3/cached-images (0.00s)

TestDownloadOnly/v1.28.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.3/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.3/binaries (0.00s)

TestDownloadOnly/v1.28.3/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.3/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.3/kubectl (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-090856 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-090856

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-090856

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-090856

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-090856

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-090856

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-090856

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-090856

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-090856

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-090856

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-090856

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-090856"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-090856"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-090856"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-090856

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-090856"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-090856"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-090856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-090856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-090856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-090856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-090856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-090856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-090856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-090856" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-090856"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-090856"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-090856"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-090856"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-090856"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-090856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-090856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-090856" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-090856"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-090856"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-090856"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-090856"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-090856"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-090856

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-090856"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-090856"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-090856"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-090856"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-090856"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-090856"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-090856"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-090856"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-090856"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-090856"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-090856"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-090856"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-090856"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-090856"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-090856"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-090856"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-090856"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-090856"

                                                
                                                
----------------------- debugLogs end: kubenet-090856 [took: 3.923749834s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-090856" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-090856
--- SKIP: TestNetworkPlugins/group/kubenet (4.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-090856 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-090856

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-090856

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-090856

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-090856

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-090856

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-090856

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-090856

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-090856

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-090856

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-090856

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-090856"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-090856"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-090856"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-090856

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-090856"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-090856"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-090856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-090856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-090856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-090856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-090856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-090856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-090856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-090856" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-090856"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-090856"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-090856"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-090856"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-090856"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-090856

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-090856

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-090856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-090856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-090856

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-090856

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-090856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-090856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-090856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-090856" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-090856" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-090856"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-090856"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-090856"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-090856"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-090856"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17486-7305/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 01 Nov 2023 00:43:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0-beta.0
      name: cluster_info
    server: https://192.168.61.217:8443
  name: force-systemd-env-256488
contexts:
- context:
    cluster: force-systemd-env-256488
    extensions:
    - extension:
        last-update: Wed, 01 Nov 2023 00:43:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0-beta.0
      name: context_info
    namespace: default
    user: force-systemd-env-256488
  name: force-systemd-env-256488
current-context: force-systemd-env-256488
kind: Config
preferences: {}
users:
- name: force-systemd-env-256488
  user:
    client-certificate: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/force-systemd-env-256488/client.crt
    client-key: /home/jenkins/minikube-integration/17486-7305/.minikube/profiles/force-systemd-env-256488/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-090856

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-090856"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-090856"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-090856"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-090856"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-090856"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-090856"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-090856"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-090856"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-090856"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-090856"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-090856"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-090856"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-090856"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-090856"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-090856"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-090856"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-090856"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-090856" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-090856"

                                                
                                                
----------------------- debugLogs end: cilium-090856 [took: 4.032581767s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-090856" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-090856
--- SKIP: TestNetworkPlugins/group/cilium (4.20s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-130996" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-130996
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)